
The Archive Base Latest Questions

Editorial Team
Asked: May 15, 2026


I found this article here:

Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf

In the conclusion section, it reads:

Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management.

So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, to achieve the same performance with a “managed” (i.e. garbage collector based) language (e.g. Java, C#), the app should require 5*100 MB = 500 MB?
(And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?)

Do you know if current garbage collectors (i.e. those in the latest Java VMs and .NET 4.0) suffer from the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved?

Thanks.



1 Answer

    Editorial Team
    Answered: May 15, 2026 at 6:14 am

    You seem to be asking two things:

    • have GCs improved since that research was performed, and
    • can I use the conclusions of the paper as a formula to predict required memory.

    The answer to the first is that there have been no major breakthroughs in GC algorithms that would invalidate the general conclusions:

    • GC’ed memory management still requires significantly more virtual memory.
    • If you try to constrain the heap size, GC performance drops significantly.
    • If real memory is restricted, the GC’ed memory management approach results in substantially worse performance due to paging overheads.
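
    The heap-size sensitivity described above is easy to observe on a modern JVM by pinning the heap and watching collection frequency; a minimal sketch, assuming a hypothetical application class `App` and made-up heap sizes:

    ```shell
    # Run the same (hypothetical) app with progressively tighter max heaps and
    # compare wall-clock time; GC overhead rises sharply as -Xmx approaches
    # the live-data size.
    java -Xms500m -Xmx500m -verbose:gc App   # roomy heap: few collections
    java -Xms200m -Xmx200m -verbose:gc App   # tight heap: frequent collections
    ```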

    However, the conclusions cannot really be used as a formula:

    • The original study was done with JikesRVM rather than a Sun JVM.
    • The Sun JVM’s garbage collectors have improved in the ~5 years since the study.
    • The study does not seem to take into account that Java data structures take more space than equivalent C++ data structures for reasons that are not GC related.

    On the last point, I have seen a presentation on Java memory overheads. For instance, it found that the minimum representation size of a Java String is something like 48 bytes. (A String consists of two primitive objects: one an Object with 4 word-sized fields and the other an array with a minimum of 1 word of content. Each primitive object also has 3 or 4 words of overhead.) Java collection data structures similarly use far more memory than people realize.
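
    The ~48-byte figure can be reproduced with back-of-envelope arithmetic; a minimal sketch, assuming a 32-bit JVM (4-byte words), a 3-word object header, and the classic 4-field String layout (value, offset, count, hash) — these word counts are assumptions from the model above, not measurements:

    ```java
    // Rough estimate of the minimum footprint of a Java String, using the
    // per-object word counts described above (assumptions, not measurements).
    public class StringFootprint {
        public static void main(String[] args) {
            final int WORD = 4;          // bytes per word on a 32-bit JVM
            int stringWords = 3 + 4;     // 3-word header + 4 word-sized fields
            int arrayWords  = 3 + 1 + 1; // header + length + >=1 word of chars
            System.out.println((stringWords + arrayWords) * WORD + " bytes");
            // prints "48 bytes"
        }
    }
    ```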

    These overheads are not GC-related per se. Rather they are direct and indirect consequences of design decisions in the Java language, JVM and class libraries. For example:

    • Each Java primitive object header[1] reserves one word for the object’s “identity hashcode” value, and one or more words for representing the object lock.
    • The representation of a String has to use a separate “array of characters” because of JVM limitations. Two of the three other fields are an attempt to make the substring operation less memory intensive.
    • The Java collection types use a lot of memory because collection elements cannot be directly chained. So for example, the overheads of a (hypothetical) singly linked list collection class in Java would be 6 words per list element. By contrast an optimal C/C++ linked list (i.e. with each element having a “next” pointer) has an overhead of one word per list element.
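
    The per-element arithmetic in the last bullet can be sketched as follows; the `Node` class is hypothetical, and the word counts assume the answer's 4-word object header variant:

    ```java
    // Hypothetical singly linked list element, illustrating the per-element
    // word counts claimed above (assuming a 4-word object header).
    class Node {
        Object value; // 1 word: reference to the boxed payload
        Node next;    // 1 word: link to the next element
    }

    public class ListOverhead {
        public static void main(String[] args) {
            int header = 4, valueRef = 1, nextRef = 1;  // assumed word counts
            System.out.println(header + valueRef + nextRef
                    + " words of overhead per element");
            // A C list node that embeds its payload needs only the link,
            // i.e. 1 word of overhead:
            //   struct node { struct node *next; int payload; };
        }
    }
    ```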

    [1] In fact, the overheads are less than this on average. The JVM only “inflates” a lock following use and contention, and similar tricks are used for the identity hashcode. The fixed overhead is only a few bits. However, these bits add up to a measurably larger object header … which is the real point here.


© 2021 The Archive Base. All Rights Reserved