
Editorial Team
Asked: May 11, 2026 at 4:09 am

I’m working on something that pulls in URLs from Delicious and then uses those URLs to discover associated feeds


I’m working on something that pulls in URLs from Delicious and then uses those URLs to discover associated feeds.

However, some of the bookmarks in Delicious are not HTML links and cause BeautifulSoup (BS) to barf. Basically, I want to throw away a link if BS fetches it and it does not look like HTML.

Right now, this is what I’m getting.

trillian:Documents jauderho$ ./d2o.py 'green data center'
processing http://www.greenm3.com/
processing http://www.eweek.com/c/a/Green-IT/How-to-Create-an-EnergyEfficient-Green-Data-Center/?kc=rss
Traceback (most recent call last):
  File './d2o.py', line 53, in <module>
    get_feed_links(d_links)
  File './d2o.py', line 43, in get_feed_links
    soup = BeautifulSoup(html)
  File '/Library/Python/2.5/site-packages/BeautifulSoup.py', line 1499, in __init__
    BeautifulStoneSoup.__init__(self, *args, **kwargs)
  File '/Library/Python/2.5/site-packages/BeautifulSoup.py', line 1230, in __init__
    self._feed(isHTML=isHTML)
  File '/Library/Python/2.5/site-packages/BeautifulSoup.py', line 1263, in _feed
    self.builder.feed(markup)
  File '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py', line 108, in feed
    self.goahead(0)
  File '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py', line 150, in goahead
    k = self.parse_endtag(i)
  File '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py', line 314, in parse_endtag
    self.error('bad end tag: %r' % (rawdata[i:j],))
  File '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py', line 115, in error
    raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: bad end tag: u'</b  />', at line 739, column 1

Update:

Jehiah’s answer does the trick. For reference, here’s some code to get the content type:

import urllib

def check_for_html(link):
    # Fetch the URL and return its Content-Type header
    out = urllib.urlopen(link)
    return out.info().getheader('Content-Type')
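
As a rough usage sketch (Python 2; the d_links list of bookmark URLs is taken from the traceback above and the looks_like_html helper is illustrative, not part of the original code), the helper can be used to keep only links whose Content-Type looks like HTML:

def looks_like_html(link):
    # getheader() may return None; the header may also carry a charset,
    # e.g. 'text/html; charset=utf-8', so use a prefix check.
    content_type = check_for_html(link) or ''
    return content_type.startswith('text/html')

html_links = [link for link in d_links if looks_like_html(link)]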

1 Answer

  1. Answer added on May 11, 2026 at 4:09 am

    I simply wrap my BeautifulSoup processing and catch the HTMLParser.HTMLParseError exception:

    import HTMLParser, BeautifulSoup

    try:
        soup = BeautifulSoup.BeautifulSoup(raw_html)
        for a in soup.findAll('a'):
            href = a['href']
            # ... process the link ...
    except HTMLParser.HTMLParseError:
        print 'failed to parse', url

    But beyond that, you can check the Content-Type of the response when you crawl a page and make sure it is something like text/html or application/xhtml+xml before you even try to parse it. That should head off most errors.
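
    Putting the two together, a minimal sketch (Python 2 with BeautifulSoup 3; the fetch_and_parse name and the urllib fetch are illustrative assumptions, not part of the original answer) might look like:

    import urllib
    import HTMLParser
    import BeautifulSoup

    def fetch_and_parse(url):
        # Illustrative helper: fetch the URL, skip responses that do not
        # advertise an HTML/XHTML payload, and discard pages whose markup
        # BeautifulSoup cannot parse.
        response = urllib.urlopen(url)
        content_type = response.info().getheader('Content-Type') or ''
        if 'text/html' not in content_type and 'xhtml' not in content_type:
            return None
        try:
            return BeautifulSoup.BeautifulSoup(response.read())
        except HTMLParser.HTMLParseError:
            print 'failed to parse', url
            return None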
