
The Archive Base


Asked by Editorial Team on May 14, 2026 at 10:51 pm


I am looking to replace, throughout a large document, all high-Unicode characters, such as accented Es, left and right quotes, etc., with their “normal” counterparts in the low range, such as a regular ‘E’ and straight quotes. I need to perform this on a very large document rather often. I see an example of this, in what I think might be Perl, here: http://www.designmeme.com/mtplugins/lowdown.txt

Is there a fast way of doing this in Python without chaining s.replace(…).replace(…).replace(…)…? I tried it with just a few characters to replace, and stripping the document became really slow.
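A single-pass alternative to chained replace calls is str.translate with a precomputed table; a minimal Python 3 sketch (the mapping below is illustrative, not exhaustive):

```python
# Build one translation table and apply it in a single pass with
# str.translate, instead of chaining str.replace calls.
table = str.maketrans({
    "\u2018": "'",   # left single curly quote
    "\u2019": "'",   # right single curly quote
    "\u201c": '"',   # left double curly quote
    "\u201d": '"',   # right double curly quote
    "\u00c9": "E",   # E with acute accent
    "\u00e9": "e",   # e with acute accent
})

text = "\u201cR\u00e9sum\u00e9\u201d"
print(text.translate(table))  # prints "Resume" (straight quotes)
```

translate walks the string once and looks each character up in the table, so adding more mappings does not add more passes over the document the way each extra .replace() call does.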

EDIT: here is my version of unutbu’s code, which doesn’t seem to work:

# -*- coding: iso-8859-15 -*-
import unidecode
def ascii_map():
    data={}
    for num in range(256):
        h=num
        filename='x{num:02x}'.format(num=num)
        try:
            mod = __import__('unidecode.'+filename,
                             fromlist=True)
        except ImportError:
            pass
        else:
            for l,val in enumerate(mod.data):
                i=h<<8
                i+=l
                if i >= 0x80:
                    data[i]=unicode(val)
    return data

if __name__=='__main__':
    s = u'“fancy“fancy2'
    print(s.translate(ascii_map()))
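For comparison, a hypothetical Python 3 rewrite of the snippet above: the unicode() builtin no longer exists, and the module layout inside the unidecode package varies between versions (the two-hex-digit x{num:02x} names come from the original answer and are an assumption here), so the import is guarded and the map may simply come back empty on releases that use different names:

```python
# Hedged Python 3 sketch of the ascii_map() helper above.
# Assumes unidecode ships per-block modules (x00.py, x01.py, ...) each
# holding a `data` tuple; blocks that fail to import are skipped, so the
# function still runs (returning an empty map) if the layout differs.
def ascii_map():
    data = {}
    for num in range(256):
        filename = 'x{num:02x}'.format(num=num)
        try:
            mod = __import__('unidecode.' + filename, fromlist=['data'])
        except ImportError:
            continue
        for low, val in enumerate(mod.data):
            codepoint = (num << 8) + low
            if codepoint >= 0x80 and val is not None:
                data[codepoint] = val  # values are already str in Python 3
    return data

if __name__ == '__main__':
    s = '\u201cfancy\u201cfancy2'
    print(s.translate(ascii_map()))
```

Note that in Python 3 the source file is utf-8 by default, so the iso-8859-15 coding declaration is no longer needed.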


1 Answer

Answered by Editorial Team on May 14, 2026 at 10:51 pm
    # -*- encoding: utf-8 -*-
    import unicodedata
    
    def shoehorn_unicode_into_ascii(s):
        return unicodedata.normalize('NFKD', s).encode('ascii','ignore')
    
    if __name__=='__main__':
        s = u"éèêàùçÇ"
        print(shoehorn_unicode_into_ascii(s))
        # eeeaucC
    

    Note, as @Mark Tolonen kindly points out, the method above removes some characters like
    ß‘’“”. If the above code truncates characters that you wish translated, then you may have to use the string’s translate method to manually fix these problems. Another option is to use unidecode (see J.F. Sebastian’s answer).
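Under Python 3, encode() returns bytes, so the NFKD approach above needs a final decode; and, per the note about dropped characters, leftovers like ß and curly quotes can be patched with a small translate table first (a stdlib-only sketch; the fixup set is illustrative):

```python
import unicodedata

# Characters NFKD cannot decompose to ASCII, patched by hand.
FIXUPS = str.maketrans({
    '\u00df': 'ss',                 # ß has no ASCII decomposition
    '\u2018': "'", '\u2019': "'",   # curly single quotes
    '\u201c': '"', '\u201d': '"',   # curly double quotes
})

def shoehorn_unicode_into_ascii(s):
    # Patch the known stragglers, then decompose accents
    # (é -> e + combining acute) and drop the non-ASCII combining marks;
    # anything still non-ASCII is silently discarded.
    s = s.translate(FIXUPS)
    return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')

print(shoehorn_unicode_into_ascii('\u00e9\u00e8\u00ea\u00e0\u00f9\u00e7\u00c7'))  # eeeaucC
print(shoehorn_unicode_into_ascii('\u201cgro\u00df\u201d'))  # "gross"
```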

When you have a large unicode string, using its translate method will be much, much faster than using the replace method.

    Edit: unidecode has a more complete mapping of unicode codepoints to ascii.
    However, unidecode.unidecode loops through the string character-by-character (in a Python loop), which is slower than using the translate method.

The following helper function uses unidecode's data files and the translate method to attain better speed, especially for long strings.

    In my tests on 1-6 MB text files, using ascii_map is about 4-6 times faster than unidecode.unidecode.

    # -*- coding: utf-8 -*-
    import unidecode
    def ascii_map():
        data={}
        for num in range(256):
            h=num
            filename='x{num:02x}'.format(num=num)
            try:
                mod = __import__('unidecode.'+filename,
                                 fromlist=True)
            except ImportError:
                pass
            else:
                for l,val in enumerate(mod.data):
                    i=h<<8
                    i+=l
                    if i >= 0x80:
                        data[i]=unicode(val)
        return data
    
    if __name__=='__main__':
        s = u"éèêàùçÇ"
        print(s.translate(ascii_map()))
        # eeeaucC
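Which spelling wins depends on the size of the mapping and of the text, so it is worth measuring on your own data. A stdlib-only micro-benchmark harness (the characters and sizes are illustrative), which also checks that both spellings produce the same result:

```python
import timeit

# Illustrative four-entry mapping; the real ascii_map() table above
# has hundreds of entries, which is where translate pulls ahead.
TABLE = str.maketrans({'\u201c': '"', '\u201d': '"',
                       '\u00e9': 'e', '\u00df': 'ss'})

TEXT = '\u201cR\u00e9sum\u00e9\u201d stra\u00dfe plain ascii filler ' * 5000

def with_translate(s):
    return s.translate(TABLE)

def with_replace(s):
    return (s.replace('\u201c', '"').replace('\u201d', '"')
             .replace('\u00e9', 'e').replace('\u00df', 'ss'))

assert with_translate(TEXT) == with_replace(TEXT)  # same output either way
t_tr = timeit.timeit(lambda: with_translate(TEXT), number=10)
t_re = timeit.timeit(lambda: with_replace(TEXT), number=10)
print(f'translate: {t_tr:.3f}s   chained replace: {t_re:.3f}s')
```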
    

Edit2: Rhubarb, if # -*- encoding: utf-8 -*- is causing a SyntaxError, try # -*- encoding: cp1252 -*-. Which encoding to declare depends on the encoding your text editor uses to save the file. Linux tends to use utf-8, while Windows seems to tend toward cp1252.


© 2021 The Archive Base. All Rights Reserved