In an OS we originally have one ready queue for the threads (note: only one queue). Why would it be better to have a separate queue for each semaphore? Pros and cons can be about efficiency, complexity, or something else.
Ok, to answer this question, please refer to my answer to another question here on SO, in relation to 'Does Linux time division processes or threads?'. Let's follow on from that structure of an imaginary kernel, using an x86 processor and C code. I have amended this structure to take semaphores into account:

```c
struct regs {
    int eax, ebx, ecx, edx, es, ds, gs, fs, cs, ip, flags;
    struct tss *task_sel;
};

struct mem_loc {
    int *phys_mem_begin;
    int *phys_mem_end;
    int *semaphores;
};

struct thread {
    struct regs *regs;
    struct mem_loc *memlocTracker;
    int parent_id;
    struct thread *next;
};

struct task {
    struct regs *regs;
    struct mem_loc *task_mem;
    int *filehandles;
    int priority;
    int num_threads;          /* a count, not a pointer */
    int quantum;
    int duration;
    int start_time, end_time;
    int parent_id;
    struct thread *task_thread;
    /* ... */
    struct task *next;
};

struct queue_ready {
    struct task *task_id;
    struct queue_ready *next;
};
```

There, I think that looks better, right? Following on from my previous answer linked above, let's see what happens when there are queues involved.
Now look at the `queue_ready` structure. Suppose there are tasks in that structure as set up by the kernel (a linked list of `queue_ready`, to be more precise), ordered on the `priority` field of the `task` structure itself, with a priority of 1, for example, meaning ready to be executed. Let's look at an imaginary scheduler written in C, shown below. Ok, it may be crude and vulnerable to nit-picking, but put that aside to concentrate on the aspect of the question.
```c
static void scheduler(void)
{
    struct queue_ready *rdy_sched;

    while (1) {
        for (rdy_sched = head_queue_ready; rdy_sched != NULL;
             rdy_sched = rdy_sched->next) {
            if (time >= rdy_sched->task_id->start_time
                      + rdy_sched->task_id->quantum) {
                /* Quantum used up: save state, move on to the next task */
                save_cpu_task_state(&rdy_sched->task_id->regs);
                save_mem_task_state(&rdy_sched->task_id->task_mem);
            } else {
                struct regs *task_reg = rdy_sched->task_id->regs;
                struct mem_loc *mem_task = rdy_sched->task_id->task_mem;
                load_cpu_task_state(task_reg);
                load_mem_task_state(mem_task);
                jmp_and_exec_cpu_task(task_reg->cs, task_reg->ip);
                save_cpu_task_state(&rdy_sched->task_id->regs);
                if (rdy_sched->task_id->num_threads > 0) {
                    struct thread *threads = rdy_sched->task_id->task_thread;
                    while (threads != NULL) {
                        struct regs *thread_reg = threads->regs;
                        load_cpu_thread_state(thread_reg);
                        if (threads->memlocTracker->semaphores) {
                            /* Housekeeping in relation to semaphores */
                            lock_memory(threads->memlocTracker);
                            jmp_and_exec_cpu_thread(thread_reg->cs,
                                                    thread_reg->ip);
                            save_cpu_thread_state(&threads->regs);
                            unlock_memory(threads->memlocTracker);
                        } else {
                            jmp_and_exec_cpu_thread(thread_reg->cs,
                                                    thread_reg->ip);
                            save_cpu_thread_state(&threads->regs);
                        }
                        threads = threads->next;
                    }
                }
            }
        }
    }
}
```

The kernel's scheduler looks overly complex, but it isn't really. The only omission I have left out of the code is the handling of priorities, and that can be ignored for the discussion of the OP's question.
Let's break this `scheduler` function down, but first a quick peek at the functions used within it:

- `load_mem_task_state` and `save_mem_task_state` load and save the memory state based on the `phys_mem_begin` and `phys_mem_end` fields of the `mem_loc` structure for the said task. For brevity, this loads all memory owned by the said task, including threads.
- `load_cpu_task_state` and `save_cpu_task_state` load and save the CPU registers held in the task's `regs` structure; `load_cpu_thread_state` and `save_cpu_thread_state` do the same for a thread.
- `jmp_and_exec_cpu_task` jumps to the `cs` and `ip` registers of the said task's `regs` structure, again the same for the thread respectively with `jmp_and_exec_cpu_thread`.

Now, `scheduler` runs forever, iterating over the linked list of tasks. The first check sees whether the task has run for a period exceeding its `quantum` amount, for example 20 ms. If so, it saves the CPU state and the state of memory (perhaps to a paging file on disk), ready to go on to the next task in the list.

If there is still time left, it loads the task's CPU registers and its memory (perhaps from a page), jumps to where the last instruction pointer and code segment were (`ip` and `cs` respectively), and the task resumes.
If the said task has threads, the scheduler iterates over the linked list of threads, loading each thread's CPU registers, jumping into its code to resume execution, and then saving its CPU registers.
Now, this is where things get a bit hairy, especially in this toy OS: how to manage semaphores! Uh huh, looking like a big headache looming over the OS developer's head…
I would imagine that if a section of memory is locked, the `semaphores` field would be non-NULL, having been acquired by the runtime manager in the user-land code that requested the semaphore. The scheduler would lock the memory to prevent a thread trampling the data, jump into the code for the said thread, then unlock the memory when that thread's duration is completed and save the state of the thread's CPU registers.

Conclusion:
This is my conclusion based on the above. As for facts to back me up, I cannot offer any: this theory has been exhaustively lectured on and examined under the microscope in countless books, from universities in the US all the way around to universities in New Zealand; this is from my observations and my understanding. As you can see from this code, there is real complexity and a burden placed on the developer coding it, and beyond that, the constraints imposed and the design decisions involved, both hardware and software.
Managing semaphores in this context is far more complicated. To answer the question: if there were more queues available (whereas I have strived to keep it to one queue, `queue_ready`), the cost factor in terms of memory usage, disk usage, and above all time usage grows, as the kernel would have to keep track of those extra factors that come into play. Another equally important factor is how many priority levels there are going to be. So having a separate ready queue for each semaphore is, in my opinion, not a viable and feasible option.