CS2106 2002 -N


guozhang wang

Nov 23, 2006, 10:23:54 PM
to hexakios

CS2106   2002 -N

 

1.1

D

 

1.2

B

 

1.3

C

 

1.4

With user-level threads, switching is decided by the scheduler inside the process: a thread is only switched out when it yields, and on each switch the scheduler saves the thread's registers. Since no two threads in the process run at the same time, they cannot mutually corrupt the shared variables.

 

1.5

2 ms

 

1.6

(100/101 * 5 + 10/2 + 2 + 1/10 * 10) = 13 ms

 

1.7

(100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms

 

1.8

So that a process which is inactive for long time is more likely to run.

 

1.9

55 ns

 

1.10

5 * 0.95 + 50 * 0.05 = 4.75 + 2.5 = 7.25

 

1.11

7,2,0,5

 

1.12

2,7,5,0

 

1.13

The owner of the file can read, write, and execute it; the group the owner belongs to can read and execute it; others can do nothing.
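Assuming the mode in the question was 750 (my inference from the description above, not a value taken from the paper), Python's stat module can decode the bits:

```python
import stat

# A regular file with permission bits 750:
# owner rwx, group r-x, others nothing (as described above).
mode = stat.S_IFREG | 0o750
print(stat.filemode(mode))  # -rwxr-x---
```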

 

1.14

5

 

1.15

3

 

2.1

 

2.2

Process descriptors: we need to change the state of the process from RUNNING to READY or BLOCKED, save the registers and global variables to the stacks, and change the stack pointers in the process descriptors.

VM structures: load the page tables of the process being switched in; change the page-table pointer (PGTR) to point to the new process's page table.

 

2.3

User-level threads are scheduled by the scheduler inside the process and can only be switched when a thread yields; that in-process scheduler is in charge of saving the thread's state variables on the thread's stack.

Kernel-level threads: the kernel scheduler treats kernel-level threads just like processes: it may switch them at any time via interrupts, so the kernel saves a thread's state when it is switched out. The switching is slower than for user-level threads.

 

3.1

The mainline initializes a (counting) semaphore to 4. Mainline has an (infinite) loop which

blocks on P(sem). After the P(sem), the mainline starts a new process. When processes exit, they

must do a V(sem) on the (shared) semaphore.
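The scheme above can be sketched with a counting semaphore. This sketch uses Python threads in place of processes (the answer presumably means fork() and a shared semaphore), so the names and structure are illustrative only:

```python
import threading

MAX_ACTIVE = 4
sem = threading.Semaphore(MAX_ACTIVE)   # initialized to 4, as in the answer
lock = threading.Lock()
active = 0    # how many workers are running right now
peak = 0      # the largest value active ever reached

def worker():
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    # ... the work one child process would do ...
    with lock:
        active -= 1
    sem.release()              # V(sem): free a slot on exit

threads = []
for _ in range(20):            # the mainline's loop
    sem.acquire()              # P(sem): blocks while 4 workers are active
    t = threading.Thread(target=worker)
    t.start()
    threads.append(t)

for t in threads:
    t.join()

print(peak <= MAX_ACTIVE)      # True: never more than 4 at once
```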

 

3.2

Tutorial

 

3.3

 

4.1

Check the third bit (the TI bit) of the register cs: if it is 0, use the GDT pointer to reach the GDT; if it is 1, use the LDT pointer to reach the LDT. Use the upper 13 bits as the index to fetch the descriptor in the GDT/LDT; check the descriptor's "present" and "granularity" bits and construct the "base" and "limit" fields; use "limit" and "granularity" to check whether the offset is out of the segment's range. If it is not, add eip to the base to generate the linear address. A caching mechanism can speed up the lookups of GDT/LDT entries by index.
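The selector decoding described above can be sketched in a few lines. The field layout follows the x86 selector format (2 RPL bits, the TI bit, a 13-bit index); the descriptor-table contents below are invented for the demo:

```python
def decode_selector(sel):
    """Split a 16-bit x86 segment selector into its fields."""
    rpl = sel & 0x3           # bits 0-1: requested privilege level
    ti = (sel >> 2) & 0x1     # bit 2: 0 selects the GDT, 1 the LDT
    index = sel >> 3          # bits 3-15: index into the table
    return rpl, ti, index

def linear_address(sel, offset, gdt, ldt):
    """Selector:offset -> linear address, with a limit check."""
    rpl, ti, index = decode_selector(sel)
    base, limit = (ldt if ti else gdt)[index]
    if offset > limit:
        raise MemoryError("offset beyond segment limit")
    return base + offset

# Toy descriptor tables: (base, limit) pairs, values made up.
gdt = [(0x00000000, 0xFFFFF), (0x00400000, 0x0FFFF)]
print(hex(linear_address(0x08, 0x1234, gdt, [])))  # 0x401234
```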

 

4.2

Tutorial

 

4.3

The TLB (Translation Look-aside Buffer) is a cache inside the CPU, not in memory; operations inside the CPU are much faster than memory accesses, which is one factor in the speed-up. The other factor is that the TLB is a CAM (content-addressable memory), which enables high-speed lookups: unlike normal memory, the input of a CAM is data and the output is an address. A CAM is a hardware-managed array that needs only one operation to search its entire contents, so it is much faster than RAM.

 

5.1

 

6.1

In ext2, inodes are grouped into block groups, each group carrying a copy of the superblock and the group descriptors; the first 1024 bytes of the disk are reserved for the boot sector, and each inode uses an i_block array of pointers to reach its data blocks. Grouping inodes with their blocks exploits locality and reduces the overall seek time for disk files; the duplicate superblocks and group descriptors let the system recover a damaged group using the copies in other groups; and the i_block mechanism lets the system access file content randomly, speeding up fetches.
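As a small illustration of the grouping, ext2 finds an inode's block group by simple division (inode numbers start at 1). The inodes_per_group value below is an invented parameter, not one from the question:

```python
inodes_per_group = 1824   # invented for illustration

def inode_location(ino):
    """Which block group holds inode `ino`, and at which slot in its table."""
    group = (ino - 1) // inodes_per_group
    index = (ino - 1) % inodes_per_group
    return group, index

print(inode_location(1))     # (0, 0): the first inode, in group 0
print(inode_location(1825))  # (1, 0): the first inode of group 1
```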

 

6.2

 

6.3

 

7.1

If we merge all the inode tables into one, we save the time of finding which inode table to search: we would no longer need to check the group descriptors to locate the specific inode-table block before looking up an inode.

There are two main difficulties. First, when there are many inodes, one block may not be large enough to hold the whole inode table, so we would have to enlarge the block size of all blocks. Second, even if the whole table fits in one block and we can find the exact entry from the inode number, the entry's block numbers can be very large, so each entry must be stored at a bigger size.

 

7.2

 

 

 

 

Does anyone know 2.2 and 3.1?



--
Computer Science and Engineering Department
Information Technology School
Fudan University,  200433
ShangHai
China.P.R

Li Yi

Nov 23, 2006, 11:37:22 PM
to Hexakios
> 1.1
>
> D
Does it slow down the CPU? I think the clock frequency is not
affected... Er...

> 1.2
>
> B
>
>
>
> 1.3
>
> C


>
>
>
> 1.6
>
> (100/101 * 5 + 10/2 + 2 + 1/10 * 10) = 13 ms

Why isn't it 5 + 10/2 + 2/10*10 = 12?

> 1.7
>
> (100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms

Why isn't it 2*(5+10/2+1/10*10) = 22?

> 1.9
>
> 55 ns
three levels...so...it is 50+50+50+5=155?

> 1.10
>
> 5 * 0.95 + 50 * 0.05 = 4.75 + 2.5 = 7.25

so...0.95*5+0.05*155=...
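Working that out numerically (using the 155 ns three-level walk from 1.9; this just checks the arithmetic, not the model):

```python
hit_rate = 0.95
tlb_hit = 5      # ns, translation on a TLB hit
walk = 155       # ns, three-level walk from 1.9: 50+50+50+5
effective = hit_rate * tlb_hit + (1 - hit_rate) * walk
print(round(effective, 2))  # 12.5
```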

> 2.1
A process called 'init' will set the child process's PPID to 1 (the
process init).
I think the reason is that the exit code of a child process must be
accepted by its parent... otherwise the child process will remain a zombie
forever.

> 2.2


>
> change the stack pointers to the process descriptors.

Does the stack pointer point to the process descriptor?

> 4.1
>
> Check the third bit of the register cs: if it is 1 then use the GDT pointer

ah...not the third bit...it is ... the 14th bit?


> to the GDT table; if it is 0 then use the LDT pointer to the LDT table. Use
> the upper 13 bits as the index to fetch the entry in the GDT/LDT, check the
> entry's "present" and "granularity" bit and the constructed "base" and
> "limit" part, use offset and use "limit" and "granularity" to check if it is
> out of the memory range. If it is not, add the eip to base to generate the
> linear address. We can use cache mechanism to speed up the look up in the
> GDT/LDT entries using the index.

> 5.1
Interrupt-driven, as it does not send signals frequently...
And a satellite processor because we want to integrate it... wild
guess..

> 6.3
It will improve reliability to an extent... but with too big an
overhead...
And this can't prevent data-block corruption.
Bad idea....

guozhang wang

Nov 23, 2006, 11:49:58 PM
to Hexa...@googlegroups.com


2006/11/24, Li Yi <liy...@gmail.com>:

> 1.1
>
> D
> Does it slows down CPU? I think the clock frequency is not
> affected...Er...

> 1.2
>
> B
>
>
>
> 1.3
>
> C
>
>
>
> 1.6
>
> (100/101 * 5 + 10/2 + 2 + 1/10 * 10) = 13 ms
> Why isn't it 5 + 10/2 + 2/10*10 = 12?
 
The 1/10 * 10 term is the expected switch time for the case where the first sector is the last sector of the current cylinder and the second one is at the beginning of the next cylinder.
 

> 1.7
>
> (100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms
> Why isn't it 2*(5+10/2+1/10*10) = 22?
 
Why is that? I don't understand...
 

> 1.9
>
> 55 ns
> three level...so...it is 50+50+50+5=155?
 
my fault

> 1.10
>
> 5 * 0.95 + 50 * 0.05 = 4.75 + 2.5 = 7.25
> so...0.95*5+0.05*155=...

> 2.1
> A process called 'init' will set the child process's PPID to 1 (the
> process init).
> I think the reason is that...exit code of child process must be
> accepted by its parent...otherwise the child process will remain zombie
> forever.
 
can you say it more clearly?

> 2.2
>
> change the stack pointers to the process descriptors.
> Does Stack pointer point to process descriptor?

> 4.1
>
> Check the third bit of the register cs: if it is 1 then use the GDT pointer
> ah...not the third bit...it is ... the 14th bit?
> to the GDT table; if it is 0 then use the LDT pointer to the LDT table. Use
> the upper 13 bits as the index to fetch the entry in the GDT/LDT, check the
> entry's "present" and "granularity" bit and the constructed "base" and
> "limit" part, use offset and use "limit" and "granularity" to check if it is
> out of the memory range. If it is not, add the eip to base to generate the
> linear address. We can use cache mechanism to speed up the look up in the
> GDT/LDT entries using the index.

> 5.1
> Interrupt driven, as it does not send signal frequently...
> And a satellite processor because we want to integrate it...wild
> guess..

> 6.3
> It will improve reliability to an extent...but with...too big
> overhead...
> And...this can't prevent data block corruption.
> bad idea....

Li Yi

Nov 23, 2006, 11:59:55 PM
to Hexakios
> > > 1.6
> > >
> > > (100/101 * 5 + 10/2 + 2 + 1/10 * 10) = 13 ms
> > Why isn't it 5 + 10/2 + 2/10*10 = 12?
>
>
> 1/10 * 10 is for the expected switch time if the second sector is in the
> begining of the next cylinder and the first one is in the last sector of
> current one

Then you should not add it here.
You need to compute the probability...then you obtain
prob * 2/10*10 + (1-prob) * (1/10*10 + 2 + 10/2 + 1/10*10)
as the probability is near 1... so I don't think it is a big matter
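For what it's worth, the two main candidate expressions for 1.6 come out very close; this just evaluates them and decides nothing:

```python
# Candidate expected-time expressions for 1.6, in ms.
original = 100/101 * 5 + 10/2 + 2 + 1/10 * 10  # the posted answer
simpler = 5 + 10/2 + 2/10 * 10                 # the alternative above
print(round(original, 2), simpler)             # 12.95 12.0
```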

> > 1.7
> > >
> > > (100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms
> > Why isn't it 2*(5+10/2+1/10*10) = 22?
>
>
> why is this? I dont understand...

to access one sector...we need 5+10/2+1/10*10
so for two sectors...

Huang Maoliang

Nov 24, 2006, 2:02:24 AM
to Hexa...@googlegroups.com
guozhang wang wrote:

1.4

In the user-mode thread, the switching is decided by the scheduler inside the process, every time it does a switch, it will save the registers and sharing variables in the threads, therefore there will not exist a mutual corrupt in it.

I think in this case no two threads in the same process run at the same time. The scheduler in the process is not preemptive, which avoids two threads accessing the shared variables simultaneously. A switch only occurs when one thread yields the CPU. This is something like disabling interrupts to protect a critical section, I think.



1.7
> > >
> > > (100/101 * 5 + 10/2 + 2 + 1/10 * (5 + 10/2) ) = 13 ms
  
> > Why isn't it 2*(5+10/2+1/10*10) = 22?
>
>
> why is this? I dont understand...
to access one sector...we need 5+10/2+1/10*10
so for two sectors...

I think there is something more interesting: for each of the two sectors, the probability of no cylinder switch is 1/2, so the average switch time for each is 1/2 * 5.
Is that right?



1.8

So that a process which is inactive for long time is more likely to run.

I think if the process is I/O bound, it tends to be inactive. If it is selected to run when it becomes ready, it can issue its next I/O request as quickly as possible and then block again.
> 2.1
  
A process called 'init' will set the child process's PPID to 1 (the
process init).
I think the reason is that...exit code of child process must be
accepted by its parent...otherwise the child process will remain zombie
forever.

Every process sends a signal to its parent when it exits. If the parent ignores the signal, the child becomes a zombie.
In Unix/Linux, processes are hierarchical: every process has a parent except the init process.
So a child needs a new parent when its original parent exits; that new parent is init.
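The reaping can be seen in a tiny POSIX sketch (Unix-only, since it uses os.fork): the parent must wait to collect the child's exit status; skipping the wait is exactly what leaves the child a zombie until init adopts it.

```python
import os

# Unix-only sketch of parent/child exit-status collection.
pid = os.fork()
if pid == 0:
    os._exit(7)                     # child exits with status 7
else:
    _, status = os.waitpid(pid, 0)  # parent reaps the child here
    print(os.WEXITSTATUS(status))   # 7
```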

2.2

Process descriptors: need to change the state of the process RUNNING to READY or BLOCKED, save the registers and global variables to the stacks and change the stack pointers to the process descriptors.

VM structures: load the page tables of the switched process; change the pointers PGTR of the page table to the one it switch to,

I think the LDT/GDT will change too.

3.3

 

textbook P55 conditionvariable.c

4.3

The TLB(Translation Look-aside Buffer) is cached in the CPU, not in memory, as we know, operations on CPU is much faster than in memory, it is one factor of the speed up; Another factor goes to the CAM memory to enable high speed look-ups, different from normal memories, the input of CAM is data and the output of CAM is address, what's in CAM is a hardware managed array, and need only one operation to search the entire memory of CAM, so it is much faster than RAM;

The TLB is a cache; in the Pentium it is inside the CPU.
The CAM operation is not a search: it compares all the stored values at the same time.

6.1

In EXT2, inodes are grouped in different block groups, with each group having the same super blocks, group descriptors, and the first 1024 byte is reserved for boot sector, and use i_block arrays of pointers pointing to the data block. Grouped inodes and blocks make use of the principle of locality to speed up the whole seek time of the disk files; the duplicate super blocks and group descriptors enable the system to use the data of other groups to save the damaged group; i_block mechanism enables the system to access the file content randomly, thus speed up the fetching time.

 

6.2

Lab5 ex2?

> 6.3
  
It will improve reliability to an extent...but with...too big
overhead...
And...this can't prevent data block corruption.
bad idea....

One more thing: what does the system do when it finds that the three inode entries differ from one another?
And when a file is added, appended to, deleted, or modified, three inode entries must be updated: too many disk operations.

7.1

If we merge all the inode tables into one, then we can save the time of checking and finding which inode table to look up to find the inode checking the block descriptors. And we would not need to check the block descriptors to find the specific inode table block.

The difficulty is mainly two: First when there are lots of inodes it might not be enough for one block to fill in the whole inode table, thus we have to enlarge the block size of all blocks;

Why enlarge the block size? Just use more blocks.

Second even if we can fill the whole inode table into one block, we can find the exact entry using the inode number, the entry's block number can be very big thus we must save the entry in a bigger size.

I think the problem lies in the fact that when a disk is added or removed, the whole inode table will be affected.

guozhang wang

Nov 24, 2006, 2:13:59 AM
to Hexa...@googlegroups.com


2006/11/24, Huang Maoliang <kc...@163.com>:
I also know this code, but what if we remove this while loop?

4.3

The TLB(Translation Look-aside Buffer) is cached in the CPU, not in memory, as we know, operations on CPU is much faster than in memory, it is one factor of the speed up; Another factor goes to the CAM memory to enable high speed look-ups, different from normal memories, the input of CAM is data and the output of CAM is address, what's in CAM is a hardware managed array, and need only one operation to search the entire memory of CAM, so it is much faster than RAM;

TLB is a cache, In Pentium, it's in the CPU.
The operation is not search in CAM. It is comparision all the values in the same time.
 
Can't understand... Can you say it more clearly? Maybe in Chinese.

6.1

In EXT2, inodes are grouped in different block groups, with each group having the same super blocks, group descriptors, and the first 1024 byte is reserved for boot sector, and use i_block arrays of pointers pointing to the data block. Grouped inodes and blocks make use of the principle of locality to speed up the whole seek time of the disk files; the duplicate super blocks and group descriptors enable the system to use the data of other groups to save the damaged group; i_block mechanism enables the system to access the file content randomly, thus speed up the fetching time.

 

6.2

Lab5 ex2?
 
I'm just wondering: how can we locate a specific "block" using just an inode "number"? A file should contain many blocks, or have I misunderstood?
 

> 6.3
  
It will improve reliability to an extent...but with...too big
overhead...
And...this can't prevent data block corruption.
bad idea....

one more thing, what does the system do when it finds out that three inode entries are different from each other.
When a file is added, appended, deleted, modified. Three inode entries must be modified. Too many disk operations.

7.1

If we merge all the inode tables into one, then we can save the time of checking and finding which inode table to look up to find the inode checking the block descriptors. And we would not need to check the block descriptors to find the specific inode table block.

The difficulty is mainly two: First when there are lots of inodes it might not be enough for one block to fill in the whole inode table, thus we have to enlarge the block size of all blocks;

why to enlarge the block size, just use more blocks.

Second even if we can fill the whole inode table into one block, we can find the exact entry using the inode number, the entry's block number can be very big thus we must save the entry in a bigger size.

I think the problem lies in that when one disks are added or removed, then the whole inode table will affected




Huang Maoliang

Nov 24, 2006, 2:57:59 AM
to Hexa...@googlegroups.com
guozhang wang wrote:


3.3

 

textbook P55 conditionvariable.c
 
I also know this code, but what if we remove this while loop?
Without the while loop, the thread blocks itself immediately. But in the textbook code, the thread blocks itself only when the condition is false. What would happen if the condition were false forever? The thread would sleep forever, because no one would wake it up.
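The while-loop recheck being discussed looks like this with Python's threading.Condition (a stand-in for the textbook's conditionvariable.c, which I don't have at hand):

```python
import threading

cond = threading.Condition()
ready = False     # the condition the waiter keeps rechecking
result = []

def consumer():
    with cond:
        while not ready:    # the while loop under discussion:
            cond.wait()     # recheck the condition after every wakeup
        result.append("consumed")

def producer():
    global ready
    with cond:
        ready = True        # make the condition true first...
        cond.notify()       # ...then wake the waiter

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(result)  # ['consumed']
```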



4.3

The TLB(Translation Look-aside Buffer) is cached in the CPU, not in memory, as we know, operations on CPU is much faster than in memory, it is one factor of the speed up; Another factor goes to the CAM memory to enable high speed look-ups, different from normal memories, the input of CAM is data and the output of CAM is address, what's in CAM is a hardware managed array, and need only one operation to search the entire memory of CAM, so it is much faster than RAM;

TLB is a cache, In Pentium, it's in the CPU.
The operation is not search in CAM. It is comparision all the values in the same time.
 
Cant understand... Can you say it more clearly? may be in chinese

All the linear addresses and their corresponding physical addresses are stored in the TLB. During translation, the CPU sends the linear address to the TLB, which is a CAM. The circuitry inside the CAM is designed so that when a linear address comes in, it returns the physical address. How does it find the entry? That is the hardware designer's affair.
I'm sorry: it may not compare all the contents of the CAM, maybe only some part of it.
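A software analogy of that one-shot lookup (a real CAM compares every stored tag in parallel; a Python dict only mimics the interface, and the entries here are invented):

```python
PAGE = 4096
tlb = {0x12345: 0x00042, 0x12346: 0x00043}  # VPN -> PFN, made-up entries

def translate(vaddr):
    """Translate a virtual address via the toy TLB, or signal a miss."""
    vpn, offset = divmod(vaddr, PAGE)
    if vpn in tlb:                                   # TLB hit
        return tlb[vpn] * PAGE + offset
    raise KeyError("TLB miss: fall back to the page-table walk")

print(hex(translate(0x12345 * PAGE + 0x10)))  # 0x42010
```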