
The Archive Base

Editorial Team
Asked: May 17, 2026

I’ve read the papers linked to in this question. I half get it. Can someone help me figure out how to implement it?

I assume that features are generated by some heuristic. Using a POS tagger as an example: maybe looking at the training data shows that 'bird' is tagged NOUN in all cases, so the feature f1(z_(n-1), z_n, X, n) is generated as

(if x_n = 'bird' and z_n = NOUN then 1 else 0)

where X is the input vector and Z is the output vector. During weight training, we find that this f1 is never violated, so the corresponding weight λ1 (using λ for lambda) would end up positive and relatively large. Both guessing features and training seem challenging to implement, but otherwise straightforward.
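To make the setup concrete, here is a minimal sketch of such an indicator feature function, using the POS-tagging example above. All names are illustrative, not from any particular library:

```python
def f1(z_prev, z_n, X, n):
    """Indicator feature: fires (returns 1) when the n-th word of the
    sentence X is 'bird' and its proposed tag z_n is NOUN.

    z_prev is the previous tag z_(n-1); this particular feature ignores it,
    but transition features would not.
    """
    return 1 if X[n] == "bird" and z_n == "NOUN" else 0
```

The unnormalized score of a whole tag sequence Z is then the exponential of the sum, over positions n and features k, of λ_k · f_k(Z[n-1], Z[n], X, n).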

I'm lost on how one applies the model to untagged data. Do you initialize the output vector with some arbitrary labels, and then change labels wherever doing so increases the sum over all the λ · f terms?

Any help on this would be greatly appreciated.



1 Answer

  1. Editorial Team
     Answered on May 17, 2026 at 1:13 am

    I am not completely sure if I understand you correctly, but yes: on the output side each vector is augmented with a start and an end symbol.

    You are also right about feature functions being generated by some heuristic. Usually the heuristic is to take all possible combinations. In your example there would be a feature function for each (word, tag) pair, resulting in a large number of features. A common way to formulate such features is through a feature template.
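    One hypothetical way to realize such a template in code is to generate one indicator feature per (word, tag) pair observed in the training data, rather than hand-writing each function. This is only a sketch; real CRF toolkits represent features more compactly:

```python
def build_emission_features(tagged_sentences):
    """Generate one indicator feature function per (word, tag) pair seen
    in the training data, mimicking a simple emission feature template.

    tagged_sentences: list of sentences, each a list of (word, tag) pairs.
    Returns a dict mapping (word, tag) -> feature function f(z_prev, z_n, X, n).
    """
    pairs = {(w, t) for sent in tagged_sentences for (w, t) in sent}
    features = {}
    for word, tag in pairs:
        # Default arguments (w=word, t=tag) bind the current loop values
        # into each closure, so every feature tests its own pair.
        features[(word, tag)] = (
            lambda z_prev, z_n, X, n, w=word, t=tag: 1 if X[n] == w and z_n == t else 0
        )
    return features
```

    Transition templates (features over (z_prev, z_n) pairs) would be generated the same way.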

    When evaluating the model you don't care about normalization, so you are looking for the sequence that gives you the largest numerator term. Usually the Viterbi algorithm is used to do so, except for very large label sets (in your example, a very large tag inventory), in which case approximations are used.

    Viterbi on CRFs works much as it does with HMMs. You start at the beginning of your sequence and compute, for each possible label of the first word, the maximum unnormalized score ending there; since the only predecessor is the START symbol, this is just the score of the transition out of START. In the next step you iterate over all labels that are possible for the second element of your prediction, i.e. z_2. The maximum unnormalized score is computed from the values at the predecessor nodes (the values you computed in the first step) and your model: you combine the potentials of the predecessor, the transition to the node in question, and the node itself, and take the maximum over all predecessors. And yes, since the feature functions do not limit the dependence on the source side, you may take any information from it.

    When you arrive at the end, you walk back through the stored backpointers to determine how the maximum was reached.
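    The procedure above can be sketched as follows, assuming a function score(z_prev, z, X, n) that returns the summed weighted features (the log-potential) for one transition; since scores are in log space we add rather than multiply, and the normalizer Z(X) is never needed. All names here are illustrative:

```python
def viterbi(X, labels, score, start="<S>"):
    """Viterbi decoding for a linear-chain CRF.

    X: the observed sequence (e.g. words), labels: the possible tags,
    score(z_prev, z, X, n): unnormalized log-score of assigning label z
    at position n given previous label z_prev.
    Returns the highest-scoring label sequence.
    """
    n = len(X)
    # best[i][z]: max log-score of any label prefix ending in z at position i
    best = [{z: score(start, z, X, 0) for z in labels}]
    back = [{}]
    for i in range(1, n):
        best.append({})
        back.append({})
        for z in labels:
            # Combine predecessor value with transition + emission potentials.
            cands = {zp: best[i - 1][zp] + score(zp, z, X, i) for zp in labels}
            zp_max = max(cands, key=cands.get)
            best[i][z] = cands[zp_max]
            back[i][z] = zp_max  # remember how the maximum was reached
    # Walk back from the best final label to recover the full sequence.
    z = max(best[-1], key=best[-1].get)
    path = [z]
    for i in range(n - 1, 0, -1):
        z = back[i][z]
        path.append(z)
    return list(reversed(path))
```

    With a toy score that rewards ('the', DET) and ('bird', NOUN), this recovers the tag sequence [DET, NOUN] for the input ['the', 'bird'].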

    For further reading I recommend the report by Rahul Gupta.



© 2021 The Archive Base. All Rights Reserved
