
The Archive Base Latest Questions

Editorial Team
Asked: May 15, 2026
This is a concurrent queue I wrote which I plan on using in a thread pool I’m writing. I’m wondering if there are any performance improvements I could make. atomic_counter is pasted below if you’re curious!

#ifndef NS_CONCURRENT_QUEUE_HPP_INCLUDED
#define NS_CONCURRENT_QUEUE_HPP_INCLUDED

#include <ns/atomic_counter.hpp>
#include <boost/noncopyable.hpp>
#include <boost/smart_ptr/detail/spinlock.hpp>
#include <cassert>
#include <cstddef>

namespace ns {
    template<typename T,
             typename mutex_type = boost::detail::spinlock,
             typename scoped_lock_type = typename mutex_type::scoped_lock>
    class concurrent_queue : boost::noncopyable {
        struct node {
            node * link;
            T const value;
            explicit node(T const & source) : link(0), value(source) { }
        };
        node * m_front;
        node * m_back;
        atomic_counter m_counter;
        mutex_type m_mutex;
    public:
        // types
        typedef T value_type;

        // construction
        concurrent_queue() : m_front(0), m_back(0), m_mutex() { }
        ~concurrent_queue() { clear(); }

        // capacity
        std::size_t size() const { return m_counter; }
        bool empty() const { return (m_counter == 0); }

        // modifiers
        void push(T const & source);
        bool try_pop(T & destination);
        void clear();
    };

    template<typename T, typename mutex_type, typename scoped_lock_type>
    void concurrent_queue<T, mutex_type, scoped_lock_type>::push(T const & source) {
        node * hold = new node(source);
        scoped_lock_type lock(m_mutex);
        if (empty())
            m_front = hold;
        else
            m_back->link = hold;
        m_back = hold;
        ++m_counter;
    }

    template<typename T, typename mutex_type, typename scoped_lock_type>
    bool concurrent_queue<T, mutex_type, scoped_lock_type>::try_pop(T & destination) {
        node const * hold;
        {
            scoped_lock_type lock(m_mutex);
            if (empty())
                return false;
            hold = m_front;
            if (m_front == m_back)
                m_front = m_back = 0;
            else
                m_front = m_front->link;
            --m_counter;
        }
        destination = hold->value;
        delete hold;
        return true;
    }

    template<typename T, typename mutex_type, typename scoped_lock_type>
    void concurrent_queue<T, mutex_type, scoped_lock_type>::clear() {
        node * hold;
        {
            scoped_lock_type lock(m_mutex);
            hold = m_front;
            m_front = 0;
            m_back = 0;
            m_counter = 0;
        }
        if (hold == 0)
            return;
        node * it;
        while (hold != 0) {
            it = hold;
            hold = hold->link;
            delete it;
        }
    }
}

#endif

atomic_counter.hpp

#ifndef NS_ATOMIC_COUNTER_HPP_INCLUDED
#define NS_ATOMIC_COUNTER_HPP_INCLUDED

#include <boost/cstdint.hpp>
#include <boost/interprocess/detail/atomic.hpp>
#include <boost/noncopyable.hpp>

namespace ns {
    class atomic_counter : boost::noncopyable {
        volatile boost::uint32_t m_count;
    public:
        explicit atomic_counter(boost::uint32_t value = 0) : m_count(value) { }

        operator boost::uint32_t() const {
            return boost::interprocess::detail::atomic_read32(const_cast<volatile boost::uint32_t *>(&m_count));
        }

        void operator=(boost::uint32_t value) {
            boost::interprocess::detail::atomic_write32(&m_count, value);
        }

        void operator++() {
            boost::interprocess::detail::atomic_inc32(&m_count);
        }

        void operator--() {
            boost::interprocess::detail::atomic_dec32(&m_count);
        }
    };
}

#endif
1 Answer
  1. Editorial Team
     Added an answer on May 15, 2026 at 1:58 pm

    I think you will run into performance problems with a linked list in this case because of calling new for each new node. And this isn’t just because calling the dynamic memory allocator is slow. It’s because calling it frequently introduces a lot of concurrency overhead because the free store has to be kept consistent in a multi-threaded environment.

    I would use a vector that you resize to be larger when it’s too small to hold the queue. I would never resize it smaller.

    I would arrange the front and back values so the vector is a ring buffer. This will require that you move elements when you resize though. But that should be a fairly rare event and can be mitigated to some extent by giving a suggested vector size at construction.
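    The ring-buffer suggestion might look something like the sketch below. It uses std::vector and std::mutex from the standard library rather than the Boost types in the question, and the names (ring_queue, grow) are illustrative, not from the original code; treat it as a sketch of the growth-only resize policy, not a drop-in replacement.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical sketch of the ring-buffer idea: a std::vector used as a
// circular buffer that only ever grows, guarded by one mutex. Names are
// illustrative, not from the original post.
template <typename T>
class ring_queue {
    std::vector<T> m_buf;    // storage; its size() is the current capacity
    std::size_t m_front;     // index of the oldest element
    std::size_t m_size;      // number of queued elements
    std::mutex m_mutex;

    // Double the capacity and unwrap the ring so the queued elements sit at
    // indices [0, m_size); this is the element-moving cost mentioned above,
    // paid only on growth.
    void grow() {
        std::vector<T> bigger;
        bigger.reserve(m_buf.size() * 2);
        for (std::size_t i = 0; i < m_size; ++i)
            bigger.push_back(m_buf[(m_front + i) % m_buf.size()]);
        bigger.resize(m_buf.size() * 2);
        m_buf.swap(bigger);
        m_front = 0;
    }

public:
    // The suggested size at construction mitigates how often grow() runs.
    explicit ring_queue(std::size_t hint = 16)
        : m_buf(hint ? hint : 1), m_front(0), m_size(0) { }

    void push(T const & source) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_size == m_buf.size())
            grow();
        m_buf[(m_front + m_size) % m_buf.size()] = source;
        ++m_size;
    }

    bool try_pop(T & destination) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_size == 0)
            return false;
        destination = m_buf[m_front];
        m_front = (m_front + 1) % m_buf.size();
        --m_size;
        return true;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_size;
    }
};
```

    Note this sketch requires a default-constructible T (the constructor and resize fill unused slots); a real implementation would more likely use raw storage. Whether the better locality actually beats the linked list is a measurement question, per the cache-line caveat below.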

    Alternatively you could keep the linked list structure, but never destroy a node. Just keep adding it to a queue of free nodes. Unfortunately the queue of free nodes is going to require locking to manage properly, and I’m not sure you’re really in a better place than if you called delete and new all the time.
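    A sketch of that node-recycling variant, again with illustrative names and a plain std::mutex: popped nodes go onto an intrusive free list protected by the same lock that guards the queue, which is exactly the extra locking cost described above.

```cpp
#include <mutex>

// Hypothetical sketch of the "never delete a node" variant: popped nodes are
// pushed onto a free list and reused by later pushes, so new/delete traffic
// stops once the queue reaches its high-water mark. Names are illustrative.
template <typename T>
class recycling_queue {
    struct node {
        node * link;
        T value;
    };
    node * m_front;
    node * m_back;
    node * m_free;   // recycled nodes, LIFO
    std::mutex m_mutex;

    // Caller must hold m_mutex: reuse a free node if one exists.
    node * acquire(T const & source) {
        if (m_free) {
            node * n = m_free;
            m_free = n->link;
            n->link = 0;
            n->value = source;
            return n;
        }
        return new node{0, source};
    }

public:
    recycling_queue() : m_front(0), m_back(0), m_free(0) { }

    ~recycling_queue() {
        // Drain both the live queue and the free list.
        node * lists[2] = { m_front, m_free };
        for (node * p : lists)
            while (p) { node * n = p->link; delete p; p = n; }
    }

    void push(T const & source) {
        std::lock_guard<std::mutex> lock(m_mutex);
        node * hold = acquire(source);
        if (m_back) m_back->link = hold; else m_front = hold;
        m_back = hold;
    }

    bool try_pop(T & destination) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (!m_front) return false;
        node * hold = m_front;
        m_front = hold->link;
        if (!m_front) m_back = 0;
        destination = hold->value;
        hold->link = m_free;   // recycle instead of delete
        m_free = hold;
        return true;
    }
};
```

    Because the free list shares the queue's lock, no second mutex is needed, but note that node acquisition now happens under the lock, so every push pays for it even when a node is reused.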

    You will also get better locality of reference with a vector. But I’m not positive how that will interact with the cache lines having to shuttle back and forth between CPUs.

    Some others suggest a ::std::deque, and I don’t think that’s a bad idea, but I suspect the ring-buffer vector is a better one.
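    For comparison, a mutex-guarded ::std::deque version is only a few lines, since deque already allocates elements in chunks and so amortizes allocator traffic on its own (sketch only; the name is illustrative):

```cpp
#include <deque>
#include <mutex>

// Hypothetical sketch of the std::deque alternative mentioned above.
template <typename T>
class deque_queue {
    std::deque<T> m_items;
    std::mutex m_mutex;
public:
    void push(T const & source) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_items.push_back(source);
    }

    bool try_pop(T & destination) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_items.empty()) return false;
        destination = m_items.front();
        m_items.pop_front();
        return true;
    }
};
```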
