CS2106 2003 -N
1.1
1.2
For security reasons the s-bit works only when used on binaries (compiled code) and not on scripts (Perl scripts being an exception). It causes the file to be executed under the user-ID of the file's owner rather than that of the user who executes it.
1.3
All files that are open in the parent process are also open in the forked child process: the child inherits copies of the parent's file descriptors, and each inherited descriptor shares its file offset with the parent's.
1.4
1.5
100 * 2 * 1/3 + 10/2 + 80/10 * 10 + 6/10 * 10
1.6
When a process uses fork() to create a child and the child runs and finishes, most of the child's resources are deallocated on exit. But if the parent does not call wait() or waitpid() to check for its completion, the child keeps its slot in the process table (holding its exit status) until the parent collects it; while it waits in that state it is a zombie.
1.7
0x00108
1.8
0xc0000008
1.9
50 * 0.1 + 5 * 0.9 = 9.5 ns
1.10
50 + 5 = 55 ns
1.11
1,5,0,2,7
1.12
LRU, 5
1.13
If we use a plain assignment new = original instead of dup(), both variables hold the same descriptor number. So when close() is called through the original descriptor, the file is closed for new as well; when you then try to access the file through new, you get a bad-file-descriptor error (EBADF).
1.14
C (pure guess)
1.15
B, C
2.1
1. Scheduling
The OS needs to access the process descriptor to change a process's state (if necessary) when scheduling between processes. The state should therefore not be accessible by the process itself, to prevent it from changing its own state and confusing the OS (say, leaving two processes both marked RUNNING).
2. Stacks
The process descriptor holds stack pointers that point to the stacks storing parameters passed during system calls, among other things; the OS uses them to save and restore registers when switching from one process to another. The process itself therefore should not access this directly: if it modified the pointer, the OS would go horribly wrong when it switched back to that process.
2.2
2.3
Threads over processes:
Time efficiency: it takes much less time to create and terminate a thread than a process, and less time to switch between threads than between processes.
Communication: it is much easier to communicate between threads than between processes, which usually involves the kernel.
Processes over threads:
Safety: with kernel-scheduled threads a switch can happen at any time, so threads may corrupt each other's shared global variables; and with user-level threads, when one thread of a process blocks, all threads in that process block.
When an application can be seen as executing a series of related tasks we prefer threads; if the tasks are independent we prefer processes.
3.1
3.2
(Pure guess:) There may be situations where the waiting process must respond immediately once it can enter its critical section, so a blocking mechanism is not suitable; we must busy-wait, which is why we implement a spinlock using the semaphore.
4.1
When the process calls malloc() for 1 MB, that is 256 pages, so 256 page-table entries must be added to its page tables. If an existing page table has 256 empty entries to fill, just add them there; if there is none, a new page table must be created and one entry for it added to the page directory.
4.2
F > P: max 0 min 0
F < P: max L min P – F
4.3 (Don't know the level of the speed up)
From the logical to the linear address, there is a cache mechanism to speed things up;
from the linear to the physical address, there is also a cache (the TLB, built from CAM) to speed things up.
5.1
To separate physical-level manipulation from logical-level management.
5.2
1) Size is small (nearly no overhead); speed is very slow (every access needs a disk access); cannot be accessed randomly.
2) Size is larger than i_block; speed is faster; overhead is very big (every block has a pointer, all direct).
5.3
Bitmaps are quick for de-fragmentation but very slow for allocation and de-allocation; a linked list is harder to de-fragment, but very fast for allocation and de-allocation.
6.1
Heap, stack, argument, text, initialized data, bss
What is 6/10*10 for?
> 1.9
> 50 * 0.1 + 5 * 0.9 = 9.5 ns
105 * 0.1 + 5 * 0.9
> 1.10
>
> 50 + 5 = 55 ns
105 ns
(you need to access the page table first, then the memory... as the
page table is stored in memory, we access memory twice)
>
> 1.11
>
> 1,5,0,2,7
1 5 2 0 7?
>
> 1.12
>
> LRU, 5
LRU 9?
> 2.2
If a lower-priority process blocks a higher-priority one, let it have
the higher priority temporarily.
> 3.1
(Wild guess...
sem_wait(): msgrcv(p, ...) for all processes p... take the disjunction?
sem_post(): msgsnd(p, ...) for all processes p...
)
> 4.1
>
> When the process doing the "malloc", to mallocs 1MB which is 256 pages for
> its usage, therefore the look up tables will add 256 page entry in its page
> tables; is there is a page table contains 256 empty entries to fill, just
> add them to this page table, if there is no one, need to create a new page
> table and add one entry to the page directory.
Hi... it is not 1 MB, it is 8 MB...
> 5.3
>
> Bitmaps are quick in de-fragmentation, but very slow in allocation and
> de-allocation; linked list is harder to manage in de-fragmentation, but very
> fast in allocation and de-allocation.
>
Huh? why is bitmap slow in allocation?
bitmap eats more space...hehe...
--
Computer Science and Engineering Department
Information Technology School
Fudan University, 200433
ShangHai
China.P.R
> 2.3
>
> Threads over process:
>
> Time efficiency: use much less time to create and terminate a new
> thread than create a new process; use less time to switch between
> threads than between processes.
>
> Communicate: it is much easier to communicate between threads rather
> than processes, which would involve kernel.
>
> Process over Threads:
>
> Security: under the system mode threads, switch can happen at any
> time, therefore may cause mutual corrupt in global variable between
> threads; when one threads of the process is blocked, all the threads
> in that process will be blocked.
>
> When the applications can be seems as executing a series of relevant
> tasks we prefer to use threads, if they were independent tasks we
> prefer processes.
>
Does the communication of processes involve the kernel? What about those
sharing memory to communicate?
Safety: it depends on whether the threads use a process scheduler or the
kernel scheduler. If the process scheduler is used, it is safe to share
variables among threads in the same process: threads won't be preempted
by other threads in the same process. If the kernel scheduler is used,
the threads are treated like processes, and processes sharing variables
may corrupt each other if carelessly designed.
>> 4.1
>>
>> When the process doing the "malloc", to mallocs 1MB which is 256 pages for
>> its usage, therefore the look up tables will add 256 page entry in its page
>> tables; is there is a page table contains 256 empty entries to fill, just
>> add them to this page table, if there is no one, need to create a new page
>> table and add one entry to the page directory.
>>
> Hi..it is not 1Mb...it is 8Mb...
>
And if one page table is created, one frame is needed to hold that page
table; if p[0] is written, one frame is allocated.
> 4.3 (Don't know the level of the speed up)
>
> From logical to linear address, there is cache mechanism to speed up;
>
> From linear to physical address, there is also cache (TLB using CAM)
> mechanism to speed up;
>
I think memory management is also important, since all the tables are
stored in memory or on a secondary storage device. Fewer page faults are a great help.
> 5.2
>
> 1) Size is small (nearly no overhead); speed is very slow (every time
> you need disk access), can not randomly access
>
> 2) Size is larger than i_block, speed is faster, overhead is very big
> (every block has a pointer, all directly)
>
I think the difference between (a) and (b) is that (a) is somewhat like
FAT: each block uses its first 4 bytes to point to the next block,
followed by data.
In (b) it is a list of file-pointer blocks, something like the 13 entries
in i_block: it contains pointers to blocks, each of which contains
pointers to blocks containing data. For a small file it is slow, but for a
large file it can be fast. It needs two disk operations to access the file,
while with i_block it needs 1, 2, 3, or 4 disk operations. For a small file
the inode will be small but waste disk space; for a large file the inode
will be large.
Note that I don't define how large or how small a file is.
>
>> 5.3
>>
>> Bitmaps are quick in de-fragmentation, but very slow in allocation and
>> de-allocation; linked list is harder to manage in de-fragmentation, but very
>> fast in allocation and de-allocation.
>>
>>
> Huh? why is bitmap slow in allocation?
> bitmap eats more space...hehe...
>
To find a consecutive run of free space you may need to scan the whole
bitmap in the worst case (best case ..., worst case ...).
A list takes up varied space, and list operations on disk are slow; a
bitmap can be accessed randomly.
I don't understand...My answer is the following...
1
1 0
1 0 2
1 0 2 3
1 0 2 3
1 0 2 3
1 0 2 3
1 0 2 3 4
1 0 2 3 4
1 5 2 3 4
1 5 2 0 4
1 5 2 0 4
1 5 2 0 7
> >> 2.2
> >>
> > If a lower-priority process blocks a higher-priority one, let it has
> > the higher priority temporarily.
> >
> I think when a process can't enter its CS it blocks itself.
> Thus the lower-priority process can run.
> How does the scheduler know that the lower-priority process blocks a
> higher-priority one? The higher-priority process is still running all the
> time, even though it is busy-waiting. The scheduler does not know whether a
> process wants to enter its CS or is waiting to enter the CS.
The scheduler knows what resources are being used... thus it can detect
blocking...