CS2106 2002 -N
1.1
D
1.2
B
1.3
C
1.4
With user-mode threads, switching is decided by a scheduler inside the process. On every switch it saves the thread's registers and shared variables, so the threads cannot corrupt one another's state.
1.5
2 ms
1.6
(100/101 * 5 + 10/2 + 2 + 1/10 * 10) = 13 ms
1.7
(100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms
1.8
So that a process which has been inactive for a long time is more likely to run.
1.9
55 ns
1.10
5 * 0.95 + 50 * 0.05 = 4.75 + 2.5 = 7.25
1.11
7,2,0,5
1.12
2,7,5,0
1.13
The owner of the file can read, write, and execute it; the group the owner belongs to can read and execute it; others can do nothing.
1.14
5
1.15
3
2.1
2.2
Process descriptor: change the process's state from RUNNING to READY or BLOCKED, save the registers and global variables to the stack, and update the stack pointer stored in the process descriptor.
VM structures: load the page tables of the process being switched in; point the page-table register (PGTR) at the new process's page table.
2.3
User-level threads are scheduled by a scheduler inside the process and are switched only when they need to be; the in-process scheduler is in charge of saving a thread's state on that thread's stack.
Kernel-level threads: the kernel scheduler treats kernel-level threads just like processes: it may switch a thread at any time using interrupts, so the full register state must be saved when a thread is switched out. Switching is slower compared to user-level threads.
3.1
The mainline initializes a (counting) semaphore to 4. Mainline has an (infinite) loop which
blocks on P(sem). After the P(sem), the mainline starts a new process. When processes exit, they
must do a V(sem) on the (shared) semaphore.
3.2
Tutorial
3.3
4.1
Check bit 2 of the selector in register cs (the TI bit): if it is 0, use the GDTR to locate the GDT; if it is 1, use the LDTR to locate the LDT. Use the upper 13 bits of the selector as the index to fetch the descriptor from the GDT/LDT; check the descriptor's "present" bit, and use its "base" and "limit" fields, interpreting "limit" according to the "granularity" bit, to check that the offset is within the segment. If it is, add eip (the offset) to the base to generate the linear address. The look-up is sped up by caching: the CPU keeps the fetched descriptor in hidden segment registers, so the GDT/LDT access is not repeated on every instruction.
4.2
Tutorial
4.3
The TLB (Translation Look-aside Buffer) is a cache inside the CPU, not in memory; since on-chip operations are much faster than memory accesses, that is one factor in the speed-up. The other factor is that it is built from CAM (content-addressable memory) to enable high-speed look-ups: unlike normal memory, the input of a CAM is data and its output is an address. The CAM is a hardware-managed array that needs only one operation to search its entire contents, so it is much faster than searching RAM.
5.1
6.1
In EXT2, inodes are grouped in different block groups, with each group having the same super blocks, group descriptors, and the first 1024 byte is reserved for boot sector, and use i_block arrays of pointers pointing to the data block. Grouped inodes and blocks make use of the principle of locality to speed up the whole seek time of the disk files; the duplicate super blocks and group descriptors enable the system to use the data of other groups to save the damaged group; i_block mechanism enables the system to access the file content randomly, thus speed up the fetching time.
6.2
6.3
7.1
If we merge all the inode tables into one, we save the time of finding which inode table to search for a given inode, and we would no longer need to consult the group descriptors to locate the specific inode table block.
The difficulty is mainly twofold. First, when there are lots of inodes, one block might not be enough to hold the whole inode table, so we would have to enlarge the block size of all blocks. Second, even if we can fit the whole inode table into one block and can find the exact entry from the inode number, the entry's block numbers can be very big, so each entry must be stored in a bigger size.
7.2
Anyone who knows 2.2 and 3.1????
> 1.2
>
> B
>
>
>
> 1.3
>
> C
>
>
>
> 1.6
>
> (100/101 * 5 + 10/2 + 2 + 1/10 * 10) = 13 ms
Why isn't it 5 + 10/2 + 2/10*10 = 12?
> 1.7
>
> (100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms
Why isn't it 2*(5+10/2+1/10*10) = 22?
> 1.9
>
> 55 ns
three level...so...it is 50+50+50+5=155?
> 1.10
>
> 5 * 0.95 + 50 * 0.05 = 4.75 + 2.5 = 7.25
so...0.95*5+0.05*155=...
> 2.1
A process called 'init' will set the child process's PPID to 1 (the init process).
I think the reason is that the exit code of a child process must be collected by its parent; otherwise the child process will remain a zombie forever.
> 2.2
>
> change the stack pointers to the process descriptors.
Does Stack pointer point to process descriptor?
> 4.1
>
> Check the third bit of the register cs: if it is 1 then use the GDT pointer
ah...not the third bit...it is ... the 14th bit?
> to the GDT table; if it is 0 then use the LDT pointer to the LDT table. Use
> the upper 13 bits as the index to fetch the entry in the GDT/LDT, check the
> entry's "present" and "granularity" bit and the constructed "base" and
> "limit" part, use offset and use "limit" and "granularity" to check if it is
> out of the memory range. If it is not, add the eip to base to generate the
> linear address. We can use cache mechanism to speed up the look up in the
> GDT/LDT entries using the index.
> 5.1
Interrupt-driven, as it does not send signals frequently...
And a satellite processor because we want to integrate it... wild guess..
> 6.3
It will improve reliability to an extent... but with too much overhead...
And... this can't prevent data block corruption.
Bad idea....
> 1.1
>
> D
Does it slow down the CPU? I think the clock frequency is not
affected... Er...
Then you should not add it here.
You need to compute the probability... then you obtain
prob * 2/10*10 + (1-prob) * (1/10*10 + 2 + 10/2 + 1/10*10)
As the probability is near 1... so... I don't think it is a big matter.
> > 1.7
> > >
> > > (100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms
> > Why isn't it 2*(5+10/2+1/10*10) = 22?
>
>
> Why is this? I don't understand...
to access one sector...we need 5+10/2+1/10*10
so for two sectors...
1.7
> > > (100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms
> > Why isn't it 2*(5+10/2+1/10*10) = 22?
> Why is this? I don't understand...
To access one sector we need 5+10/2+1/10*10, so for two sectors... I think there is something more interesting: for the two sectors, the probability that there is no cylinder switch between them is 1/2, so the average switching cost for each is 1/2 * 5. Is that right?
> 2.1
A process called 'init' will set the child process's PPID to 1 (the init process). I think the reason is that the exit code of a child process must be collected by its parent; otherwise the child remains a zombie forever. Every process sends a signal to its parent when it exits; if the parent ignores the signal, the child becomes a zombie. In Unix/Linux, processes are hierarchical: every process has a parent except the init process, so a child needs a new parent when its original parent exits. That new parent is init.
textbook P55 conditionvariable.c
3.3
> 6.3
It will improve reliability to an extent, but with too much overhead... And this can't prevent data block corruption. Bad idea... One more thing: what does the system do when it finds that the three inode entries all differ from each other? And whenever a file is added, appended, deleted, or modified, three inode entries must be updated. Too many disk operations.
TLB is a cache, In Pentium, it's in the CPU.4.3
The TLB(Translation Look-aside Buffer) is cached in the CPU, not in memory, as we know, operations on CPU is much faster than in memory, it is one factor of the speed up; Another factor goes to the CAM memory to enable high speed look-ups, different from normal memories, the input of CAM is data and the output of CAM is address, what's in CAM is a hardware managed array, and need only one operation to search the entire memory of CAM, so it is much faster than RAM;
The operation is not search in CAM. It is comparision all the values in the same time.
Lab5 ex2?
> 6.1
> In EXT2, inodes are grouped into block groups; each group carries a copy of the superblock and the group descriptors, the first 1024 bytes are reserved for the boot sector, and each inode uses the i_block array of pointers to reach its data blocks.
> 6.2
> 6.3
> It will improve reliability to an extent, but with too much overhead... And this can't prevent data block corruption. Bad idea... One more thing: what does the system do when it finds that the three inode entries all differ from each other? Whenever a file is added, appended, deleted, or modified, three inode entries must be updated. Too many disk operations.
> 7.1
> If we merge all the inode tables into one, we save the time of finding which inode table to search, and we would no longer need to consult the group descriptors to locate the specific inode table block.
> First, when there are lots of inodes, one block might not be enough to hold the whole inode table, so we would have to enlarge the block size of all blocks.
Why enlarge the block size? Just use more blocks.
I think the problem lies in that when a disk is added or removed, the whole inode table will be affected.
> Second, even if we can fit the whole inode table into one block and find the exact entry from the inode number, the entry's block numbers can be very big, so each entry must be stored in a bigger size.
> textbook P55 conditionvariable.c
> 3.3
I also know this code, but what if we remove this while loop?
> TLB is a cache; in the Pentium, it's in the CPU.
> The operation is not a search in CAM. It compares all the values at the same time.
I can't understand... Can you say it more clearly? Maybe in Chinese?