
Editorial Team
Asked: May 14, 2026 at 7:47 am

I’m developing a Firefox plugin and I fetch web pages to do some analysis


I’m developing a Firefox plugin and I fetch web pages to do some analysis for the user. The problem is that when I try to fetch (via XMLHttpRequest) pages that are not UTF-8 encoded, the string I see is messed up. For example, Hebrew pages in windows-1255 or Chinese pages in gb2312.

I already tried the following:

var uDecoder = Components.classes["@mozilla.org/intl/scriptableunicodeconverter"]
    .getService(Components.interfaces.nsIScriptableUnicodeConverter);
// Tell the converter which charset the page was really encoded in.
uDecoder.charset = "windows-1255";
// Run the (already decoded) responseText back through the converter.
alert(uDecoder.ConvertToUnicode(xhr.responseText));

// Same idea via the UTF-8 converter service: reinterpret the string as windows-1255.
var decoder = Components.classes["@mozilla.org/intl/utf8converterservice;1"]
    .getService(Components.interfaces.nsIUTF8ConverterService);

alert(decoder.convertStringToUTF8(xhr.responseText, "WINDOWS-1255", true));

I also tried escape/unescape/encodeURIComponent.

Any ideas?



1 Answer

  1. Editorial Team added an answer on May 14, 2026 at 7:47 am

    Once XMLHttpRequest has tried to decode a non-UTF-8 string using UTF-8, you’ve already lost. The byte sequences in the page that weren’t valid UTF-8 sequences will have been mangled (typically converted to �, the U+FFFD replacement character). No amount of re-encoding/decoding will get them back.
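
    For illustration (my sketch, not part of the original answer): many different bytes all collapse to the same replacement character, so the mapping is many-to-one and cannot be inverted:

    // Illustrative only: the windows-1255 bytes 0xE0, 0xE1 and 0xE2 are three
    // different Hebrew letters, but none of them is valid standalone UTF-8, so
    // the decoder replaces each one with U+FFFD before responseText is built.
    var ch = xhr.responseText.charAt(0);   // e.g. '\uFFFD'
    alert(ch.charCodeAt(0).toString(16));  // "fffd", whichever byte it was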

    Pages that specify a Content-Type: text/html;charset=something HTTP header should be OK. Pages that don’t have a real HTTP header but do have a <meta> version of it won’t be, because XMLHttpRequest doesn’t know about parsing HTML so it won’t see the meta. If you know in advance the charset you want, you can tell XMLHttpRequest and it’ll use it:

    xhr.open(...);
    // overrideMimeType must be called before send(); it forces the response
    // to be decoded as gb2312 instead of the default.
    xhr.overrideMimeType('text/html;charset=gb2312');
    xhr.send();
    

    (This is a currently non-standardised Mozilla extension.)

    If you don’t know the charset in advance, you can request the page once, scan the (mis-decoded) response for a <meta> charset declaration, parse that out, and request the page again with the new charset.
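
    A minimal sketch of that two-request approach (my illustration, with a deliberately naive regex; real pages declare the charset in several ways, so treat it as a starting point only):

    function fetchWithSniffedCharset(url, callback) {
      var probe = new XMLHttpRequest();
      probe.open('GET', url);
      probe.onload = function () {
        // Matches e.g. <meta http-equiv="Content-Type" content="text/html;charset=gb2312">
        var m = /<meta[^>]+charset\s*=\s*["']?([\w-]+)/i.exec(probe.responseText);
        if (!m) { callback(probe.responseText); return; } // no declaration found
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.overrideMimeType('text/html;charset=' + m[1]); // decode correctly this time
        xhr.onload = function () { callback(xhr.responseText); };
        xhr.send();
      };
      probe.send();
    }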

    In theory you could get a binary response in a single request:

    xhr.overrideMimeType('text/html;charset=iso-8859-1');
    

    and then convert that from bytes-as-chars to UTF-8. However, iso-8859-1 wouldn’t work for this because the browser interprets that charset as really being Windows code page 1252.

    You could maybe use another codepage that maps every byte to a character, and do a load of tedious character replacements to map every character in that codepage to the character it would have been in real-ISO-8859-1, then do the conversion. Most encodings don’t map every byte, but Arabic (cp1256) might be a candidate for this?
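
    A variant of that idea (my sketch, not from the original answer): the browser’s windows-1252 decoder effectively maps every byte to some character too, assuming it passes the five undefined bytes (0x81, 0x8D, 0x8F, 0x90, 0x9D) through as their own code points, which Gecko appears to do. Only the bytes 0x80–0x9F decode to non-identity code points, so a 27-entry reverse table is enough to recover the raw bytes:

    // Reverse map for the windows-1252 bytes whose decoded code point differs
    // from the byte value itself (the 0x80-0x9F range, minus the five gaps).
    var CP1252_REVERSE = {
      0x20AC: 0x80, 0x201A: 0x82, 0x0192: 0x83, 0x201E: 0x84, 0x2026: 0x85,
      0x2020: 0x86, 0x2021: 0x87, 0x02C6: 0x88, 0x2030: 0x89, 0x0160: 0x8A,
      0x2039: 0x8B, 0x0152: 0x8C, 0x017D: 0x8E, 0x2018: 0x91, 0x2019: 0x92,
      0x201C: 0x93, 0x201D: 0x94, 0x2022: 0x95, 0x2013: 0x96, 0x2014: 0x97,
      0x02DC: 0x98, 0x2122: 0x99, 0x0161: 0x9A, 0x203A: 0x9B, 0x0153: 0x9C,
      0x017E: 0x9E, 0x0178: 0x9F
    };

    // Undo the windows-1252 decoding: every character not in the table
    // decodes from the byte equal to its own code point.
    function charsToBytes(text) {
      var bytes = [];
      for (var i = 0; i < text.length; i++) {
        var cp = text.charCodeAt(i);
        bytes.push(CP1252_REVERSE.hasOwnProperty(cp) ? CP1252_REVERSE[cp] : cp & 0xFF);
      }
      return bytes;
    }

    From there you would feed the byte list to a real decoder for the actual charset (windows-1255, gb2312, etc.).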
