The problem of informative translation is particularly relevant for texts of a technical nature, where the translator has to deal with the terminology of a specific field of activity, terminology that very often has no equivalents in our language. As a result, as we can see, our language abounds in fashionable new words such as, for example, widget, gadget, banner, password, login and others. Such borrowings most often come to us from English, and IT terminology in particular. This happens because English words tend to have broader semantics: a single word can express an entire concept, whereas in our language the same concept has to be described in several words.
When translating scientific and technical texts, a certain grammatical structure and textual style are usually employed, one that corresponds to the aims and objectives of the source scientific text.
Introduction
1. Lexical features of scientific and technical literature
1.1 Terms
1.2 Abbreviations and shortenings
1.3 Internationalisms
1.4 Neologisms
2. Transformations in the English-Ukrainian translation of scientific and technical vocabulary
2.1 Borrowing and translation of terms
2.2 Problems of translating shortenings and abbreviations
2.3 Translation of neologisms and non-equivalent vocabulary
2.4 Translation of internationalisms
Conclusions
By using these mechanisms and storing the page tables in the operating system’s address space, the operating system can change the page tables while preventing a user process from changing them, ensuring that a user process can access only the storage provided to it by the operating system.
We also want to prevent a process from reading the data of another process. For example, we wouldn’t want a student program to read the grades while they were in the processor’s memory. Once we begin sharing main memory, we must provide the ability for a process to protect its data from both reading and writing by another process; otherwise, sharing the main memory will be a mixed blessing!
Remember that each process has its own virtual address space. Thus, if the operating system keeps the page tables organized so that the independent virtual pages map to disjoint physical pages, one process will not be able to access another's data. Of course, this also requires that a user process be unable to change the page table mapping. The operating system can assure safety if it prevents the user process from modifying its own page tables. However, the operating system must be able to modify the page tables. Placing the page tables in the protected address space of the operating system satisfies both requirements.
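As a rough illustration of this arrangement, the following C sketch (not from the original text) models a per-process page table whose entries can be changed only through a kernel routine; the names PageTableEntry and kernel_map_page and the table size are assumptions made for the example.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_VIRTUAL_PAGES 1024          /* assumed size, for illustration only */

    /* One page table entry: a valid bit, a write-permission bit, and the
       physical page number that the virtual page maps to. */
    typedef struct {
        bool     valid;
        bool     writable;
        uint32_t physical_page;
    } PageTableEntry;

    /* Each process has its own table; the tables themselves are stored in
       memory that only the operating system (kernel mode) can write. */
    typedef struct {
        PageTableEntry entries[NUM_VIRTUAL_PAGES];
    } PageTable;

    /* Only this kernel routine may change a mapping.  Because a user process
       has no writable mapping to the PageTable object, it cannot remap its
       own pages or reach the physical pages of another process. */
    void kernel_map_page(PageTable *pt, uint32_t vpn, uint32_t ppn, bool writable)
    {
        pt->entries[vpn].valid         = true;
        pt->entries[vpn].writable      = writable;
        pt->entries[vpn].physical_page = ppn;
    }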
When processes want to share information in a limited way, the operating system must assist them, since accessing the information of another process requires changing the page table of the accessing process. The write access bit can be used to restrict the sharing to just read sharing, and, like the rest of the page table, this bit can be changed only by the operating system. To allow another process, say, P1, to read the page owned by process P2, P2 would ask the operating system to create a page table entry for a virtual page in P1's address space that points to the same physical page that P2 wants to share. The operating system could use the write protection bit to prevent P1 from writing the data, if that was P2's wish. Any bits that determine the access rights for a page must be included in both the page table and the TLB, because the page table is accessed only on a TLB miss.
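The sharing step described above can be sketched in the same hypothetical C model (repeated here so the fragment stands alone); os_share_page and the field names are illustrative assumptions, not the API of MIPS or of any real operating system.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool valid; bool writable; uint32_t physical_page; } PageTableEntry;
    typedef struct { PageTableEntry entries[1024]; } PageTable;

    /* P2 asks the operating system to let P1 read one of its pages.  The OS
       copies the physical page number into P1's table and clears the write
       bit unless P2 explicitly allowed writing.  Only kernel code runs this. */
    void os_share_page(PageTable *p1, uint32_t p1_vpn,
                       const PageTable *p2, uint32_t p2_vpn,
                       bool allow_write)
    {
        p1->entries[p1_vpn].valid         = true;
        p1->entries[p1_vpn].physical_page = p2->entries[p2_vpn].physical_page;
        p1->entries[p1_vpn].writable      = allow_write;   /* the write protection bit */
    }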
Elaboration: When the operating system decides to change from running process P1 to running process P2 (called a context switch or process switch), it must ensure that P2 cannot get access to the page tables of P1 because that would compromise protection. If there is no TLB, it suffices to change the page table register to point to P2's page table (rather than to P1's); with a TLB, we must clear the TLB entries that belong to P1, both to protect the data of P1 and to force the TLB to load the entries for P2. If the process switch rate were high, this could be quite inefficient. For example, P2 might load only a few TLB entries before the operating system switched back to P1. Unfortunately, P1 would then find that all its TLB entries were gone and would have to pay TLB misses to reload them. This problem arises because the virtual addresses used by P1 and P2 are the same, and we must clear out the TLB to avoid confusing these addresses.
A common alternative is to extend the virtual address space by adding a process identifier or task identifier. The Intrinsity FastMATH has an 8-bit address space ID (ASID) field for this purpose. This small field identifies the currently running process; it is kept in a register loaded by the operating system when it switches processes. The process identifier is concatenated to the tag portion of the TLB, so that a TLB hit occurs only if both the page number and the process identifier match. This combination eliminates the need to clear the TLB, except on rare occasions.
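A minimal sketch of such a tagged lookup, assuming a 64-entry TLB held in a simple array; the structure is illustrative and much simpler than the real FastMATH hardware.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 64                  /* assumed TLB size for the sketch */

    typedef struct {
        bool     valid;
        uint8_t  asid;                      /* 8-bit address space ID           */
        uint32_t vpn;                       /* virtual page number (the tag)    */
        uint32_t ppn;                       /* physical page number             */
    } TLBEntry;

    /* A hit requires both the page number and the process identifier to match,
       so stale entries left by another process are simply ignored rather than
       having to be cleared on every context switch. */
    bool tlb_lookup(const TLBEntry tlb[TLB_ENTRIES], uint8_t current_asid,
                    uint32_t vpn, uint32_t *ppn_out)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].asid == current_asid && tlb[i].vpn == vpn) {
                *ppn_out = tlb[i].ppn;
                return true;                /* TLB hit  */
            }
        }
        return false;                       /* TLB miss */
    }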
Similar problems can occur for a cache, since on a process switch the cache will contain data from the running process. These problems arise in different ways for physically addressed and virtually addressed caches, and a variety of different solutions, such as process identifiers, are used to ensure that a process gets its own data.
Handling TLB Misses and Page Faults
Although the translation of virtual to physical addresses with a TLB is straightforward when we get a TLB hit, handling TLB misses and page faults is more complex. A TLB miss occurs when no entry in the TLB matches a virtual address. A TLB miss can indicate one of two possibilities:
1. The page is present in memory, and we need only create the missing TLB entry.
2. The page is not present in memory, and we need to transfer control to the operating system to deal with a page fault.
How do we know which of these two circumstances has occurred? When we process the TLB miss, we will look for a page table entry to bring into the TLB. If the matching page table entry has a valid bit that is turned off, then the corresponding page is not in memory and we have a page fault, rather than a TLB miss. If the valid bit is on, we can simply retrieve the desired entry.
A TLB miss can be handled in software or hardware because it will require only a short sequence of operations to copy a valid page table entry from memory into the TLB. MIPS traditionally handles a TLB miss in software. It brings in the page table entry from memory and then re-executes the instruction that caused the TLB miss. Upon re-executing, it will get a TLB hit. If the page table entry indicates the page is not in memory, this time it will get a page fault exception.
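The decision between the two cases can be sketched as follows; handle_tlb_miss, tlb_install, and raise_page_fault are hypothetical names standing in for the hardware or low-level software actions described in the text.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool valid; uint32_t ppn; } PageTableEntry;

    /* Placeholders for the actions described in the text. */
    void tlb_install(uint32_t vpn, uint32_t ppn);   /* create the missing TLB entry */
    void raise_page_fault(uint32_t vpn);            /* transfer control to the OS   */

    /* On a TLB miss we consult the page table.  A valid entry means the page is
       in memory and we only refill the TLB; an invalid entry means the page is
       not in memory, so this is really a page fault. */
    void handle_tlb_miss(const PageTableEntry *page_table, uint32_t vpn)
    {
        PageTableEntry pte = page_table[vpn];
        if (pte.valid)
            tlb_install(vpn, pte.ppn);      /* possibility 1: just refill the TLB    */
        else
            raise_page_fault(vpn);          /* possibility 2: OS must fetch the page */
    }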
Handling a TLB miss or a page fault requires using the exception mechanism to interrupt the active process, transferring control to the operating system, and later resuming execution of the interrupted process. A page fault will be recognized sometime during the clock cycle used to access memory. To restart the instruction after the page fault is handled, the program counter of the instruction that caused the page fault must be saved. Just as in Chapter 4, the exception program counter (EPC) is used to hold this value.
In addition, a TLB miss or page fault exception must be asserted by the end of the same clock cycle that the memory access occurs, so that the next clock cycle will begin exception processing rather than continue normal instruction execution. If the page fault was not recognized in this clock cycle, a load instruction could overwrite a register, and this could be disastrous when we try to restart the instruction. For example, consider the instruction lw $1, 0($1): the computer must be able to prevent the write pipeline stage from occurring; otherwise, it could not properly restart the instruction, since the contents of $1 would have been destroyed. A similar complication arises on stores. We must prevent the write into memory from actually completing when there is a page fault; this is usually done by deasserting the write control line to the memory.
Between the time we begin executing the exception handler in the operating system and the time that the operating system has saved all the state of the process, the operating system is particularly vulnerable. For example, if another exception occurred when we were processing the first exception in the operating system, the control unit would overwrite the exception program counter, making it impossible to return to the instruction that caused the page fault! We can avoid this disaster by providing the ability to disable and enable exceptions. When an exception first occurs, the processor sets a bit that disables all other exceptions; this could happen at the same time the processor sets the supervisor mode bit. The operating system will then save just enough state to allow it to recover if another exception occurs, namely, the exception program counter (EPC) and Cause registers. EPC and Cause are two of the special control registers that help with exceptions, TLB misses, and page faults; Figure 5.27 shows the rest. The operating system can then re-enable exceptions. These steps make sure that exceptions will not cause the processor to lose any state and thereby be unable to restart execution of the interrupting instruction.
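A compressed C sketch of the entry sequence just described; read_epc, read_cause, and enable_exceptions are hypothetical wrappers for the privileged operations, not real library calls.

    #include <stdint.h>

    typedef struct {
        uint32_t epc;           /* address of the instruction that faulted     */
        uint32_t cause;         /* reason for the exception (e.g., page fault) */
    } SavedExceptionState;

    /* Hypothetical wrappers for privileged register accesses. */
    uint32_t read_epc(void);
    uint32_t read_cause(void);
    void     enable_exceptions(void);

    void exception_entry(SavedExceptionState *s)
    {
        /* Exceptions are still disabled here, so a nested exception cannot
           overwrite EPC and Cause while we copy them out. */
        s->epc   = read_epc();
        s->cause = read_cause();

        /* Enough state is now saved to survive a nested exception, so the
           operating system can re-enable exceptions and continue. */
        enable_exceptions();
    }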
Once the operating system knows the virtual address that caused the page fault, it must complete three steps:
1. Look up the page table entry using the virtual address and find the location of the referenced page on disk.
2. Choose a physical page to replace; if the chosen page is dirty, it must be written back to disk before the new virtual page can be brought into this physical page.
3. Start a read to bring the referenced page from disk into the chosen physical page.
Of course, this last step will take millions of clock cycles (so will the second if the replaced page is dirty); accordingly, the operating system will usually select another process to execute in the processor until the disk access completes. Because the operating system has saved the state of the process, it can freely give control of the processor to another process.
When the read of the page from disk is complete, the operating system can restore the state of the process that originally caused the page fault and execute the instruction that returns from the exception. This instruction will reset the processor from kernel to user mode, as well as restore the program counter. The user process then re-executes the instruction that faulted, accesses the requested page successfully, and continues execution.
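Putting the three steps and the restart together, a page fault handler might be organized roughly as below; every helper function here (choose_victim_page, page_is_dirty, and so on) is a hypothetical placeholder for work that the text only describes.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool valid; bool dirty; uint32_t ppn; } PageTableEntry;

    /* Hypothetical helpers for the steps described in the text. */
    uint32_t choose_victim_page(void);                 /* e.g., approximate LRU        */
    bool     page_is_dirty(uint32_t ppn);
    void     write_page_to_disk(uint32_t ppn);
    void     read_page_from_disk(uint32_t vpn, uint32_t ppn);
    void     run_another_process_until_io_done(void);  /* disk I/O: millions of cycles */

    void handle_page_fault(PageTableEntry *page_table, uint32_t faulting_vpn)
    {
        uint32_t victim_ppn = choose_victim_page();

        if (page_is_dirty(victim_ppn))          /* write back only if it was modified */
            write_page_to_disk(victim_ppn);

        /* Start reading the missing page and let another process use the
           processor until the disk access completes. */
        read_page_from_disk(faulting_vpn, victim_ppn);
        run_another_process_until_io_done();

        /* The mapping becomes valid; the faulting process is then restored and
           re-executes the instruction that caused the fault. */
        page_table[faulting_vpn].ppn   = victim_ppn;
        page_table[faulting_vpn].dirty = false;
        page_table[faulting_vpn].valid = true;
    }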
Page fault exceptions for data accesses are difficult to implement properly in a processor because of a combination of three characteristics:
Making instructions restartable, so that the exception can be handled and the instruction later continued, is relatively easy in an architecture like the MIPS. Because each instruction writes only one data item and this write occurs at the end of the instruction cycle, we can simply prevent the instruction from completing (by not writing) and restart the instruction at the beginning.
Let’s look in more detail at MIPS. When the TLB miss occurs, the MIPS hardware saves the page number of the reference in a special register called BadVAddr and generates the exception.
The exception invokes the operating system, which handles the miss in software. Control is transferred to address 8000 0000hex, the location of the TLB miss handler. To find the physical address for the missing page, the TLB miss routine indexes the page table using the page number of the virtual address and the page table register, which indicates the starting address of the active process page table. To make this indexing fast, MIPS hardware places everything you need in the special Context register: the upper 12 bits have the address of the base of the page table, and the next 18 bits have the virtual address of the missing page. Each page table entry is one word, so the last 2 bits are 0. Thus, the first two instructions copy the Context register into the kernel temporary register $k1 and then load the page table entry from that address into $k1. Recall that $k0 and $k1 are reserved for the operating system to use without saving; a major reason for this convention is to make the TLB miss handler fast.
As shown above, MIPS has a special set of system instructions to update the TLB. The instruction tlbwr copies from control register EntryLo into the TLB entry selected by the control register Random. Random implements random replacement, so it is basically a free-running counter. A TLB miss takes about a dozen clock cycles.
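The real refill handler is a handful of MIPS instructions; the C-style sketch below only models what those instructions do, with hypothetical wrappers standing in for the privileged register accesses.

    #include <stdint.h>

    /* Hypothetical wrappers for coprocessor register accesses. */
    uint32_t read_context_register(void);        /* page table base | (bad VPN << 2)   */
    void     write_entrylo_register(uint32_t);   /* physical page number + status bits */
    void     tlb_write_random(void);             /* tlbwr: slot chosen by Random       */

    void tlb_refill(void)
    {
        /* The Context register already concatenates the page table base with the
           virtual page number of the missing page shifted left by 2, so it is
           directly the address of the one-word page table entry we need. */
        uint32_t pte_address = read_context_register();
        uint32_t pte         = *(uint32_t *)(uintptr_t)pte_address;

        /* Copy the entry into EntryLo and let tlbwr place it in a TLB slot picked
           by the free-running Random register.  The valid bit is deliberately not
           examined here; an invalid entry will fault again as a page fault. */
        write_entrylo_register(pte);
        tlb_write_random();
    }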
Note that the TLB miss handler does not check to see if the page table entry is valid. Because the exception for a missing TLB entry is much more frequent than a page fault, the operating system loads the TLB from the page table without examining the entry and restarts the instruction. If the entry is invalid, another and different exception occurs, and the operating system recognizes the page fault. This method makes the frequent case of a TLB miss fast, at a slight performance penalty for the infrequent case of a page fault.
Once the process that generated the page fault has been interrupted, it transfers control to 8000 0180hex, a different address than the TLB miss handler. This is the general address for exceptions; a TLB miss has a special entry point to lower the penalty for a TLB miss. The operating system uses the exception Cause register to diagnose the cause of the exception. Because the exception is a page fault, the operating system knows that extensive processing will be required. Thus, unlike a TLB miss, it saves the entire state of the active process. This state includes all the general-purpose and floating-point registers, the page table address register, the EPC, and the exception Cause register. Since exception handlers do not usually use the floating-point registers, the general entry point does not save them, leaving that to the few handlers that need them.
Figure 5.28 sketches the MIPS code of an exception handler. Note that we save and restore the state in MIPS code, taking care when we enable and disable exceptions, but we invoke C code to handle the particular exception.
The virtual address that caused the fault depends on whether it was an instruction or a data page fault. The address of the instruction that generated the fault is in the EPC. If it was an instruction page fault, the EPC contains the virtual address of the faulting page; otherwise, the faulting virtual address can be computed by examining the instruction (whose address is in the EPC) to find the base register and offset field.
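For the data-fault case, the computation amounts to decoding the load or store at the EPC; the sketch below assumes the MIPS I-type layout (base register in bits 25..21, signed 16-bit offset in bits 15..0) and a hypothetical saved_register helper that returns the register values saved by the handler.

    #include <stdint.h>

    /* Hypothetical helper: value of a general-purpose register as saved by the
       exception handler. */
    uint32_t saved_register(unsigned regno);

    uint32_t faulting_data_address(uint32_t epc)
    {
        uint32_t instruction = *(uint32_t *)(uintptr_t)epc;     /* the lw/sw at EPC */
        unsigned base_reg    = (instruction >> 21) & 0x1F;      /* bits 25..21      */
        int16_t  offset      = (int16_t)(instruction & 0xFFFF); /* sign-extended    */
        return saved_register(base_reg) + (int32_t)offset;      /* base + offset    */
    }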
Elaboration: This simplified version assumes that the stack pointer (sp) is valid. To avoid the problem of a page fault during this low-level exception code, MIPS sets aside a portion of its address space that cannot have page faults, called unmapped. The operating system places the exception entry point code and the exception stack in unmapped memory. MIPS hardware translates virtual addresses 8000 0000hex to BFFF FFFFhex to physical addresses simply by ignoring the upper bits of the virtual address, thereby placing these addresses in the low part of physical memory. Thus, the operating system places exception entry points and exception stacks in unmapped memory.
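The translation rule for this unmapped region can be written down directly; the two small functions below are only an illustration of the "ignore the upper bits" rule stated above.

    #include <stdbool.h>
    #include <stdint.h>

    /* Virtual addresses 0x80000000..0xBFFFFFFF are unmapped: they never go
       through the TLB and therefore can never cause a page fault. */
    bool is_unmapped(uint32_t vaddr)
    {
        return vaddr >= 0x80000000u && vaddr <= 0xBFFFFFFFu;
    }

    /* Translation ignores the upper address bits, placing these accesses in the
       low part of physical memory (a 512 MB window). */
    uint32_t unmapped_to_physical(uint32_t vaddr)
    {
        return vaddr & 0x1FFFFFFFu;
    }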
Elaboration: The code in Figure 5.28 shows the MIPS-32 exception return sequence. The older MIPS-I architecture uses rfe and jr instead of eret.
Elaboration: For processors with more complex instructions that can touch many memory locations and write many data items, making instructions restartable is much harder. Processing one instruction may generate a number of page faults in the middle of the instruction. For example, x86 processors have block move instructions that touch thousands of data words. In such processors, instructions cannot be restarted from the beginning, as we do for MIPS instructions. Instead, the instruction must be interrupted and later continued midstream in its execution. Resuming an instruction in the middle of its execution usually requires saving some special state, processing the exception, and restoring that special state. Making this work properly requires careful and detailed coordination between the exception-handling code in the operating system and the hardware.
Summary
Virtual memory is the name for the level of memory hierarchy that manages caching between the main memory and disk. Virtual memory allows a single program to expand its address space beyond the limits of main memory. More importantly, virtual memory supports sharing of the main memory among multiple, simultaneously active processes, in a protected manner.
Managing the memory hierarchy between main memory and disk is challenging because of the high cost of page faults. Several techniques are used to reduce the miss rate:
Writes to disk are expensive, so virtual memory uses a write-back scheme and also tracks whether a page is unchanged (using a dirty bit) to avoid writing unchanged pages back to disk.
The virtual memory mechanism provides address translation from a virtual address used by the program to the physical address space used for accessing memory. This address translation allows protected sharing of the main memory and provides several additional benefits, such as simplifying memory allocation. Ensuring that processes are protected from each other requires that only the operating system can change the address translations, which is implemented by preventing user programs from changing the page tables. Controlled sharing of pages among processes can be implemented with the help of the operating system and access bits in the page table that indicate whether the user program has read or write access to a page.
If a processor had to access a page table resident in memory to translate every access, virtual memory would be too expensive, as caches would be pointless! Instead, a TLB acts as a cache of translations from the page table; addresses are then translated from virtual to physical using the translations in the TLB.
Caches, virtual memory, and TLBs all rely on a common set of principles and policies. The next section discusses this common framework.
Understanding Program Performance
Although virtual memory was invented to enable a small memory to act as a large one, the performance difference between disk and memory means that if a program routinely accesses more virtual memory than it has physical memory, it will run very slowly. Such a program would be continuously swapping pages between memory and disk, called thrashing. Thrashing is a disaster if it occurs, but it is rare. If your program thrashes, the easiest solution is to run it on a computer with more memory or buy more memory for your computer. A more complex choice is to re-examine your algorithm and data structures to see if you can change the locality and thereby reduce the number of pages that your program uses simultaneously. This set of popular pages is informally called the working set.
A more common performance problem is TLB misses. Since the TLB might handle only 32-64 page entries at a time, a program could easily see a high TLB miss rate, as the processor may access less than a quarter megabyte directly: 64 x 4 KB = 0.25 MB. For example, TLB misses are often a challenge for Radix Sort. To try to alleviate this problem, most computer architectures now support variable page sizes. For example, in addition to the standard 4 KB page, MIPS hardware supports 16 KB, 64 KB, 256 KB, 1 MB, 4 MB, 16 MB, 64 MB, and 256 MB pages. Hence, if a program uses large page sizes, it can access more memory directly without TLB misses.
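To make the 64 x 4 KB = 0.25 MB arithmetic concrete, here is a small stand-alone C program (an illustration, not from the original text) that prints the reach of an assumed 64-entry TLB for a few of the page sizes listed above.

    #include <stdio.h>

    int main(void)
    {
        const long long entries = 64;                            /* assumed TLB size */
        const long long page_sizes[] = {4LL << 10, 64LL << 10,   /* 4 KB, 64 KB      */
                                        1LL << 20, 256LL << 20}; /* 1 MB, 256 MB     */

        /* "TLB reach": memory addressable without TLB misses = entries x page size. */
        for (int i = 0; i < 4; i++)
            printf("page size %9lld B -> reach %10.2f MB\n",
                   page_sizes[i], entries * page_sizes[i] / (1024.0 * 1024.0));
        return 0;
    }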
The practical challenge is getting the operating system to allow programs to select these larger page sizes. Once again, the more complex solution to reducing TLB misses is to re-examine the algorithm and data structures to reduce the working set of pages; given the importance of memory accesses to performance and the frequency of TLB misses, some programs with large working sets have been redesigned with that goal.
Check yourself
A Common Framework For Memory Hierarchies
By now, you’ve recognized that the different types of memory hierarchies share a great deal in common. Although many of the aspects of memory hierarchies differ quantitatively, many of the policies and features that determine how a hierarchy functions are similar qualitatively. Figure 5.29 shows how some of the quantitative characteristics of memory hierarchies can differ. In the rest of this section, we will discuss the common operational alternatives for memory hierarchies, and how these determine their behavior. We will examine these policies as a series of four questions that apply between any two levels of a memory hierarchy, although for simplicity we will primarily use terminology for caches.