Editorial Team
Asked: May 12, 2026


I’m trying to devise a method that will classify a given number of English words into two sets, “rare” and “common”, based on how much they are used in the language.

The number of words I would like to classify is bounded, currently at around 10,000, and includes everything from articles to proper nouns that could be borrowed from other languages (and would thus be classified as “rare”). I’ve done some frequency analysis within the corpus, and I have a distribution of these words (ranging from 1 use to at most about 100).

My intuition for such a system was to use word lists (such as the BNC word frequency corpus, WordNet, and internal corpus frequency) and assign weights to a word’s occurrence in each of them.

For instance, a word that has a mid-level frequency in the corpus (say 50) but appears in a word list W can be regarded as common, since it’s one of the most frequent in the entire language. My question is: what’s the best way to create a weighted score for something like this? Should I go discrete or continuous? In either case, what kind of classification system would work best?
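As a rough illustration of the kind of weighted score I have in mind (the weights, the corpus-frequency dictionary, the wordlist sets, and the cutoff below are all hypothetical placeholders, not something I’ve settled on):

# Sketch of a weighted "commonness" score combining in-corpus frequency
# with membership in external word lists (BNC frequency list, WordNet, etc.).
# All names and weights here are placeholders.

def commonness_score(word, corpus_freq, wordlists, freq_weight=0.5, list_weight=1.0):
    # Normalised in-corpus frequency; counts in my corpus top out around 100.
    freq_component = freq_weight * (corpus_freq.get(word, 0) / 100.0)
    # One unit of weight for each external list the word appears in.
    list_component = list_weight * sum(1 for wl in wordlists if word in wl)
    return freq_component + list_component

def classify(word, corpus_freq, wordlists, cutoff=1.0):
    # A discrete decision on top of the continuous score, with an arbitrary cutoff.
    return "common" if commonness_score(word, corpus_freq, wordlists) >= cutoff else "rare"

# Example with placeholder data:
corpus_freq = {"the": 95, "sesquipedalian": 2}
wordlists = [{"the", "book"}, {"the", "run"}]   # e.g. a BNC-style list, a WordNet lemma set
print(classify("the", corpus_freq, wordlists))             # -> "common"
print(classify("sesquipedalian", corpus_freq, wordlists))  # -> "rare"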

Or do you recommend an alternative method?

Thanks!


EDIT:

To answer Vinko’s question on the intended use of the classification:

These words are tokenized from a phrase (e.g., a book title), and the intent is to figure out a strategy for generating a search query string for the phrase when searching a text corpus. The query string can support multiple parameters such as proximity, so if a word is common, these parameters can be tweaked, as in the sketch below.
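Purely as an illustration of the kind of parameter tweaking I mean (the parameter names below are made up, not tied to any particular search engine):

def query_params(word, label):
    # Hypothetical per-token query parameters: relax matching for common
    # words, require exact matches for rare ones.
    if label == "common":
        return {"term": word, "proximity": 5, "required": False}
    return {"term": word, "proximity": 0, "required": True}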

To answer Igor’s questions:

1) How big is your corpus?
Currently, the list is limited to 10k tokens, but this is just a training set. It could go up to a few hundred thousand once I start testing it on the test set.

2) Do you have some kind of expected proportion of common/rare words in the corpus?
Hmm, I do not.

1 Answer

  1. Editorial Team
    Added an answer on May 12, 2026 at 12:51 am

    Assuming you have a way to evaluate the classification, you can use the “boosting” approach to machine learning. Boosting combines a set of weak classifiers into a strong classifier.

    Say you have your corpus and K external wordlists you can use.
    Pick N frequency thresholds. For example, you may have 10 thresholds: 0.1%, 0.2%, …, 1.0%.
    For your corpus and each of the external word lists, create N “experts”, one expert per threshold per wordlist/corpus, for a total of N*(K+1) experts. Each expert is a weak classifier with a very simple rule: if the frequency of the word is higher than its threshold, it considers the word to be “common”. Each expert has a weight.
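    A minimal sketch of how these experts could be represented (Python; the "sources" placeholder and all names here are assumptions for illustration, not part of any particular library):

    # "sources" is assumed: source name -> {word: relative frequency in that source}
    sources = {
        "my_corpus": {"the": 0.05, "sesquipedalian": 0.0001},
        "bnc_list":  {"the": 0.06, "sesquipedalian": 0.00001},
    }

    # Build N*(K+1) weak "experts", one per (source, threshold) pair.
    class Expert:
        def __init__(self, source_name, freqs, threshold):
            self.source_name = source_name
            self.freqs = freqs          # word -> relative frequency in this source
            self.threshold = threshold  # e.g. 0.001 for 0.1%
            self.weight = 1.0           # adjusted during learning

        def vote(self, word):
            # +1 means "common", -1 means "rare"
            return 1 if self.freqs.get(word, 0.0) > self.threshold else -1

    thresholds = [i / 1000.0 for i in range(1, 11)]  # 0.1%, 0.2%, ..., 1.0%
    experts = [Expert(name, freqs, t)
               for name, freqs in sources.items()
               for t in thresholds]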

    The learning process is as follows: assign the weight 1 to each expert. For each word in your corpus, make the experts vote. Sum their votes: 1 * weight(i) for “common” votes and (-1) * weight(i) for “rare” votes. If the result is positive, mark the word as common.
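    Continuing the same sketch, the weighted vote could then look like this:

    def classify(word, experts):
        # Weighted sum of votes: a positive total means "common", otherwise "rare".
        score = sum(e.weight * e.vote(word) for e in experts)
        return "common" if score > 0 else "rare"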

    Now, the overall idea is to evaluate the classification and increase the weight of experts that were right and decrease the weight of the experts that were wrong. Then repeat the process again and again, until your evaluation is good enough.

    The specifics of the weight adjustment depend on how you evaluate the classification. For example, if you don’t have per-word evaluation, you may still evaluate the classification as having “too many common” or “too many rare” words. In the first case, promote all the pro-“rare” experts and demote all the pro-“common” experts, or vice versa.
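    One illustrative way to do the coarse, aggregate-only adjustment described above (the update factor and function name are arbitrary choices, not a fixed boosting recipe):

    def adjust_weights(experts, words, verdict, factor=1.2):
        # verdict is "too many common" or "too many rare".
        # Promote experts whose votes leaned the way we need more of,
        # demote the ones that leaned the other way.
        favoured = -1 if verdict == "too many common" else 1   # -1 = "rare", +1 = "common"
        for e in experts:
            lean = sum(e.vote(w) for w in words)   # this expert's overall tendency
            if lean * favoured > 0:
                e.weight *= factor
            else:
                e.weight /= factor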
