I have been associated with several WBUT engineering courses for some time: for example, CS 403 (Advanced Architecture), CS 704D (Advanced Operating Systems), IT 703D (Distributed Computer Systems), and a number of microprocessor-based courses such as EI 502, IC 503, EI 405, and EI 611. Currently I am working with UPTU undergraduate and GBTU postgraduate courses in CSE and IT; ECS 601, Computer Networking and Distributed Systems, for example.
Thursday, September 30, 2010
Post 1st Class Test and The Puja Holidays
There are two things that will be required of my students of CS 704D after the first class test and the Puja holidays.
1. As the first internal evaluation test of CS 704D, I'd like everyone to answer questions on what the process synchronization problem is, how one could try to realize synchronization, what the criteria of a good synchronization mechanism are, and how the semaphore idea implements these concepts properly.
2. Be ready to present your findings on the various operating systems you chose. The main points to be discussed are what type of OS each one is, and how it implements the resource management issues, particularly process management, multitasking, memory management, etc. In the case of real-time systems, discuss what the real differences from the others are.
Monday, September 27, 2010
Cs 704 d aos-resource&processmanagement
Check out this SlideShare Presentation:
Sunday, September 19, 2010
[EI 502, IC 503, EI 405, EI 611]Common Mistake about Logical Operations
There is a very common mistake most of us make when working with logical shift/rotate operations. Unless we are absolutely careful, we tend to place a logical operation just before a conditional jump instruction (jump on zero, jump on non-zero, etc.). We simply assume that, as with most microprocessor instructions, the flags get set by the logical operation. We expect the zero, sign, etc. flags to be set appropriately, and that we can use that condition for the conditional branch instruction immediately after the operation.
However, the logical shift/rotate group of instructions does not affect the flags in the processor (on the 8085 at least, only CY is affected). Thus if you use a conditional jump/branch instruction right after one of these logical operations, it will fail or produce wrong results. So, watch out! Before you use a conditional jump/branch instruction, check whether the instruction you are relying on to set the flags actually does so!
Thursday, September 16, 2010
[CS 704D] Design Issues of a Distributed OS
We discussed the issues in class, yet some students wanted me to add a note. I guess it will have to be about the "why" of the issues: why should each of these issues be considered? That is the crux of the design problem anyway. Through the course we study these issues in detail, and how to address them.
In a centralized OS for a computer system contained in one location, management of resources is comparatively easy, as you can get the status of each item when asked. There is hardly any delay involved (things happen at electronic speed over various kinds of buses!). The most bothersome problem with distributed systems is that you depend on a communication system to issue commands and get information back. One is never sure one has the latest information! Without the latest information about the state of the resources, you have difficulty assigning resources and managing them in general. For example, you may have the last report from the n-th processor saying it is hale and hearty and ready to execute a process; yet when the OS is about to assign the process, it may not know the processor just went down! So the design of a distributed OS is about making sure you are able to manage the resources and deliver "accurate" processed data despite such problems. A designer has to get around the issues and deliver accurate results; designing a distributed system is about getting around these problems effectively.
We discussed in class that the following are the relevant issues.
- Transparency
- Reliability
- Flexibility
- Performance
- Scalability
- Heterogeneity
- Security
- Emulation of existing OS
Transparency: This has several dimensions that need to be handled. The topmost requirement is that the geographically distributed system should look like one integrated, monolithic system to the user. Users do not want to be bothered with the details of which resource is located where, let alone how to access it. Other needs for transparency arise from uniformity in how the OS addresses resources.
As a user, I need to access the resources of the system without bothering about where they are; this is location transparency, and name transparency and user mobility are its two dimensions. Since we are dealing with a diverse set of machines in such systems, naming should be uniform; otherwise knowledge of a resource's location would be required. The user mobility property lets a user move to any machine in the system and log in the same way.
Replication transparency ensures that even when resources are replicated, their location and copies remain transparent to the user; the system must take care of them. Failures should again be transparent, meaning that even when failures do take place they should not affect users as far as possible (it is often difficult to make failures completely transparent). Many times we need to migrate resources or processes, but even that should not be visible to the user. Concurrency-related activities carried out by the OS should also be transparent, meaning the user need not be aware of where or how processes are being executed or how concurrency is being managed.
Performance transparency requires that resources get suitably reconfigured so that reduced performance does not affect the user. In the worst case, user demands should be met with reduced performance, but nothing disastrous. Scaling transparency calls for not affecting users even when resources are added to increase the performance of the system.
Reliability: Failures can be fail-stop or Byzantine. In either case we should have means of fault avoidance, the ability to tolerate faults, and fault detection and recovery. This derives directly from the original requirement that we deliver accurate processing.
Flexibility: Ease of maintenance and ease of enhancement are definitely required, with the qualifying clause "as far as possible". The effort is to get as close to 100% as possible, though that takes quite a bit of careful design effort.
Performance: This is always a consideration. We should be able to squeeze the maximum possible performance out of a given set of resources.
Scalability: As far as possible, a linear increase in resources should result in a linear increase in performance; thus adding another processor should double throughput. As we know, this does not happen in practice, but we want as high an increase as possible.
Heterogeneity: We know a distributed system will contain all kinds of resources (machines, memory, and everything else), but that should not affect the performance or the accurate result delivery of the total system.
Security: Since, in the general case, data is exchanged over networks distributed all over, security is definitely a concern, and all three crucial questions of security gain prominence: is the sender really who he claims to be, is the receiver the right receiver, and is the data in pristine form, not tampered with?
Emulation of an existing OS: Unless the OS is being developed for the first time, this is very important. Typically a particular generation (version) of the OS will have been used to develop a lot of applications. Porting these to the newer version takes time, so until all those applications are migrated, you need the older version to be emulated within the newer one.
That, in a nutshell, is why these are the design issues that should be addressed when creating a distributed OS.
If you still have questions, please use the comments section to pose them. I'll try my best to address them as soon as possible.
Wednesday, September 15, 2010
[EI 502, IC 503, EI 405, EI 611] Microprocessor based B Tech projects
If you want to project differentiation to recruiters, the B Tech project you have done is a major tool for proving it. It is thus essential that you undertake a meaningful project and do it well. Here is a list of projects one can do that use microprocessors; it is a really long list of meaningful projects.
I find the following the most interesting in the list.
- Industrial automation using a microprocessor's parallel ports
- Digital 74 series IC tester
- Electronic voting machine using a microcontroller
- Electronics components tester using a microcontroller
- Finger print security system
- RFID based attendance system
- Telephone controlled device switching using microcontroller
- Digital lock that gets permanently locked if tried wrongly thrice
- PC controlled robot machine
- Infrared remote controller using a microcontroller
- 2 channel oscilloscope based on a PC
- Parallel port testing & programming through microprocessors/controllers
These are just a dozen ideas; the list has many more. You can even come up with some variations and get even more ideas!
Tuesday, September 14, 2010
Keeping Track of Notes site
As I mentioned in class, you can subscribe to the RSS feed on the site. See the first item on the right side of the site; click on it and you will be guided through what needs to be done. If you are on Facebook, announcements of published posts will appear there. If you have a Google account (if you are a Gmail subscriber), Buzz will show the posts as they are published; you'll have to connect to the notes site on Buzz, of course. The site URL is wbut-cse-it.blogspot.com
Another way of keeping track is to open an account on Twitter and follow me. I'll be publishing the announcements there too.
Monday, September 13, 2010
[CS 704D] Mutual Exclusion and Trial Algorithms- 3
Let us look at only the relevant portion of the code; look up the rest in Milan Milenkovic's book. We'll look at p1's code; we know p2's code is similar, with the changes being with respect to checking the status of p1.
program/module mutex3
....
var
p1using, p2using : boolean; [in this version we have boolean variables that show which process is in the CS]
process p1;
begin
while true do [do this once p1 has been initiated]
begin
p1using := true; [set the flag that shows p1 wants the CS; this has been moved to an earlier execution position, so p1using is now set safely]
while p2using do [keep testing]; [busy-wait while p2 is using the CS]
CRITICAL SECTION;
p1using := false; [clear the flag, let others use the CS]
.........
end [while]
end [p1]
You would think that after three tries all the problems would be resolved by now. As we saw, the problem the previous algorithm had with the setting of p1using does not happen here. Does that mean we have finally resolved all the issues? Not really, as it turns out. Under some conditions, p1 and p2 can get each other into an infinite loop, hanging everything.
Suppose p1 is preempted right after it has set p1using. p2, which executes now, sets p2using, drops down to test p1using, and loops forever. Now say p1 gets control back, tries to drop into the CS, finds p2using true, and starts looping on that. Thus, even though the resource is free, neither process gets access to it, and everything hangs! No one getting access to a free resource violates one of the fundamental requirements.
Having looked at all these problems, we can easily appreciate the simplicity and elegance of Dijkstra's semaphore solution. I'd advise you to look at the semaphore and the operations defined on it, and check whether the problems that came up as we tried the various solutions are avoided completely or not.
[CS 704D] Mutual Exclusion and Trial Algorithms- 2
As we saw, the first trial algorithm forced strictly alternating access (round robin, with more than two processes) on the two processes. The whole thing ran at the speed of the slower process, and when one process crashed, it could block the other forever. Also, any change in one process needed careful changes in the other; with more than two processes, that could be a handful.
The second attempt tries to remedy most of these problems. Let's take a look at it first, before we see whether it too has problems. (The pseudo-code example is from Operating Systems, Milan Milenkovic.)
program/module mutex2
....
var
p1using, p2using : boolean; [in this version we have boolean variables that show which process is in the CS]
process p1;
begin
while true do [do this once p1 has been initiated]
begin
while p2using do [keep testing]; [busy-wait while p2 is using the CS]
p1using := true; [set the flag that shows p1 is using the CS]
CRITICAL SECTION;
p1using := false; [clear the flag, let others use the CS]
.........
end [while]
end [p1]
process p2;
begin
while true do [do this once p2 has been initiated]
begin
while p1using do [keep testing]; [busy-wait while p1 is using the CS]
p2using := true; [set the flag that shows p2 is using the CS]
CRITICAL SECTION;
p2using := false; [clear the flag, let others use the CS]
.........
end [while]
end [p2]
[parent process]
begin [mutex2]
p1using := false; [usage flags are turned off; whichever process reaches the critical section first can start]
p2using := false;
initiate p1, p2; [p1 and p2 are initiated to start]
end [mutex2]
Now there clearly is no strict turn-taking. Whatever the speeds at which p1 and p2 (or more processes) operate, controlled by the flags, the processes run smoothly. One process crashing does not hold up the others, and no process needs to know what the others are doing; any changes needed can be made locally. That is a much safer and cleaner way of doing things.
One of the conditions of well-behaved mutual exclusion is that no more than one process may be operating inside the critical section. Here is how that can be violated with this algorithm.
1. Both processes are outside their respective critical sections.
2. p1 checks p2using and finds it free, so p1 is about to set p1using and enter the CS.
3. But if p2 preempts p1 before p1 can set p1using, then p2 will find it is OK to enter the CS, and sets p2using.
4. p1 resumes now; it has already found p2using to be false, so it goes ahead, sets p1using to true, and enters the CS.
Now both processes are inside the CS, and that beats the main condition of mutual exclusion!
To be entirely safe, the test of the other process's flag and the setting of one's own flag (p1using or p2using) would themselves have to happen under protection, as one indivisible step.
Sunday, September 12, 2010
Amoeba OS from Tanenbaum (Yes, that Tanenbaum)
Mr. Tanenbaum sent us all the details about the Amoeba OS that he developed as a teaching aid. I have asked a faculty friend of mine to download all the material and set the OS up. As discussed in class, I need a couple of volunteers to study it and make the others aware of what a multi-user OS should be like. In fact, one of the teams studying other operating systems can drop what it has chosen and take up Amoeba for its presentation.
Tuesday, September 7, 2010
[EI 502, IC 503, EI 405, EI 611] Ready & Hold Signal in a Microprocessor
This is a favorite question of mine and I have been able to stump a large number of interviewees during my corporate life.
Question: What is the difference in the ready and hold signals offered on microprocessors?
Answer: Let's deal with the hold signal first. Whenever a bus master other than the CPU needs to transfer data to or from memory, through DMA action, it needs to drive the system bus. The DMA controller on the other bus master has to drive the starting memory address from/to which data is to be transferred, and it controls the data and control buses to actually transfer the required amount of data. To be able to do this, no other bus master should be driving the bus electrically. So when requested for the bus by another bus master, the CPU tri-states the bus (puts its bus interface into high-impedance mode) and then acknowledges with a hold-acknowledge signal (HLDA on the 8085, or something named similarly). The bus master can now drive the bus and communicate with the memory system.
The ready signal, on the other hand, helps a comparatively slow memory subsystem serve the CPU's read/write requests. The CPU is usually very fast. It communicates with the memory system synchronously, in step with the system clock: the CPU sets up the address lines, then asserts the read or write signal after the addresses have stabilized. On a read, for example, the CPU samples the data lines after a fixed time, a multiple of the clock period. Now, if the memory is not fast enough to put out data in that time, the CPU will read wrong data. To overcome this, the memory device can request a delay through a signal named "Ready"; the CPU then waits one extra clock cycle and samples the line again, completing the cycle in the clock cycle after Ready indicates the memory is done. So this is a means of synchronizing memory with a master device. And yes, when a different bus master takes over the bus, similar problems exist and are solved through the same ready mechanism; in that case the ready signal must go to the read/write controller of the DMA master operating the bus.
Labels: DMA, Hold, Memory, Microprocessor, Ready, Synchronization
[CS 704D] Mutual Exclusion and Trial Algorithms
The semaphore idea offered by Dijkstra was brilliant: it created a simple mechanism for setting up mutual exclusion. Until that happened, several algorithms were tried to solve the problem of safeguarding a critical section through mutual exclusion. The purpose of this series of posts is to discuss these algorithms and look at what they had to offer. In today's post we discuss the first algorithm. The reference source is:
Operating Systems Concepts and Design - Milan Milenkovic, TMH
A typical coding of a mutex scheme, named mutex-1 here, could be as follows. Comments are included in [ ] brackets.
program/module mutex-1;
.
.
.
type
who = (proc1, proc2); [an enumerated, user-defined type for the two processes in existence]
var
turn : who; [a variable turn contains the id of the process whose turn it is]
process p1; [pseudo-code for process 1]
begin
while true do [once the process is initiated]
begin
while turn = proc2 do [keep testing]; [this is the busy-wait part of controlling the critical section]
CRITICAL SECTION;
turn := proc2; [release the lock; in this case, explicitly hand the turn to the other process]
.
. [code for other actions in process p1]
end [while]
end [p1]
process p2; [pseudo-code for process 2]
begin
while true do [once the process is initiated]
begin
while turn = proc1 do [keep testing]; [this is the busy-wait part of controlling the critical section]
CRITICAL SECTION;
turn := proc1; [release the lock; in this case, explicitly hand the turn to the other process]
.
. [code for other actions in process p2]
end [while]
end [p2]
[parent process]
begin [mutex1]
turn := .....; [turn can be initialized to proc1 or proc2; this decides who starts first]
initiate p1, p2;
end [mutex1]
The control mechanism forces strict turn-taking on the two processes. In a computer system the processes run at unpredictable speeds relative to each other; thus the slower process forces its pace on all the processes. Besides, there is a more serious problem: if one process crashes, it stops the other permanently. In particular, if it crashes after acquiring the turn, the other process will never get access to the resource, even though it is free. One way around this is to let the OS provide the procedures used around the critical section; the OS can then use timeouts to detect that a crash and stoppage has occurred.
Also, each process has to be aware of the other. In general that is bad policy: one process can, maliciously, affect the operation of the other.
Wednesday, September 1, 2010
[CS 704D] First Two Sets of Presentations
The first two sets of presentations are already up at show.zoho.com/public/ddas15847. There may be difficulty downloading them there, so I have also placed them at slideshare.net/ddas15847; you should not face any problems downloading the presentations from there. The links are below.
Zoho
Slideshare
This is for My Students and Anyone seeking Answers to Questions
This blog site is for my students, providing supplementary notes for the courses I am teaching. Microprocessor-based courses are also close to my heart, and I'll be posting notes on those too. Students should feel free to ask questions in the posts; I shall answer them as soon as possible.
