Interrupts
The program enters a wait loop in which it repeatedly tests the device status. During this period, the processor is not performing any useful computation. There are many situations where other tasks can be performed while waiting for an I/O device to become ready. To allow this to happen, we can arrange for the I/O device to alert the processor when it becomes ready. It can do so by sending a hardware signal called an interrupt to the processor.
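The wait-loop behavior described above can be sketched in a few lines. This is a minimal illustration, not real device code; the `Device` class and its status flag are hypothetical stand-ins for a hardware status register.

```python
# Minimal sketch of programmed I/O busy-waiting (all names hypothetical).
# The processor loops on the device's status flag, doing no useful work.

class Device:
    def __init__(self):
        self.cycles_until_ready = 3   # device becomes ready after a few polls

    def status_ready(self):
        self.cycles_until_ready -= 1
        return self.cycles_until_ready <= 0

def busy_wait_transfer(device, data):
    polls = 0
    while not device.status_ready():  # wait loop: repeatedly test device status
        polls += 1                    # each iteration is a wasted processor cycle
    return polls                      # cycles wasted before the transfer could proceed

wasted = busy_wait_transfer(Device(), data=0x41)
print(wasted)
```

Every iteration of the `while` loop is processor time that an interrupt-driven design would free up for other work.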
Common classes of interrupts:
• Program – instruction execution – overflow, division by 0, segmentation fault
• Timer – internal processor timer – processing on time intervals
• I/O – I/O controller – normal operation completion, error condition
• Hardware failure – power failure, memory parity error
Single Level Interrupts
In a single-level interrupt system there can be many interrupting devices, but all interrupt requests arrive on a single input pin of the CPU. When interrupted, the CPU has to poll the I/O ports to identify the requesting device. Polling is a software routine that checks the logic state of each device. Once the interrupting I/O port is identified, the CPU services it and then returns to the task it was performing before the interrupt.
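The polling routine just described can be sketched as follows. The device numbering and the list of pending flags are illustrative assumptions; real hardware would expose these as status-register bits.

```python
# Sketch of single-level interrupt handling: one shared interrupt line,
# so software must poll each device's status bit to find the requester.

def poll_and_service(pending):
    """pending: list of booleans, one per device, in polling order."""
    serviced = []
    for dev_id, is_pending in enumerate(pending):
        if is_pending:
            serviced.append(dev_id)   # this device raised the shared line
    return serviced

# Devices 1 and 3 asserted the shared interrupt line:
print(poll_and_service([False, True, False, True]))
```

Note that the CPU cannot tell which device interrupted until the loop reaches it; this polling cost is what multi-level interrupts eliminate.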
The following figure shows a single-level interrupt system, in which the interrupt requests from all the devices are logically ORed and connected to the interrupt input of the processor. Thus, the interrupt request from any device is routed to the processor's interrupt input. After being interrupted, the processor identifies the requesting device by reading the interrupt status of each device.
Multi Level Interrupts
In multi-level interrupts, the processor has more than one interrupt pin, and the I/O devices are tied to individual interrupt pins. Thus, an interrupting device can be identified immediately by the CPU upon receiving its interrupt request. This allows the processor to go directly to that I/O device and service it without having to poll first, which obviously saves time in processing interrupts.
In a multi-level interrupt system, when a processor is interrupted, it stops executing its current program and calls a special routine that "services" the interrupt. The event that causes the interruption is called the interrupt, and the special routine which is executed is called the interrupt service routine (ISR). When the asynchronous interrupt input is asserted, a special sequence in the control logic begins:
1. The processor completes its current instruction. No instruction is cut off in the middle of its execution.
2. The program counter's current contents are stored on the stack. Remember, during the execution of an instruction the program counter points to the memory location of the next instruction.
3. The program counter is loaded with the address of the interrupt service routine.
4. Program execution continues with the instruction taken from the memory location pointed to by the new program counter contents.
5. The interrupt service routine continues to execute until the return instruction is executed.
6. After execution of the RET instruction, the processor pops from the stack the saved address of the next instruction in the interrupted program and puts it back into the program counter. This allows the interrupted program to continue at the instruction following the one where it was interrupted.
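The six-step sequence above can be traced with a small simulation. All addresses here are hypothetical; the point is only the push/load/pop discipline on the program counter.

```python
# Simulation of the interrupt sequence: save PC, run ISR, restore PC.
# Addresses (0x0104, 0x2000) and the ISR length are hypothetical.

def run_with_interrupt(pc, isr_addr, isr_len):
    stack = []
    # Step 1: the current instruction completes (pc already points past it).
    # Step 2: the program counter's contents are pushed onto the stack.
    stack.append(pc)
    # Step 3: the PC is loaded with the ISR's address.
    pc = isr_addr
    # Steps 4-5: the ISR executes until its RET instruction.
    pc += isr_len
    # Step 6: RET pops the saved address back into the PC.
    pc = stack.pop()
    return pc   # execution resumes at the interrupted program's next instruction

resumed_at = run_with_interrupt(pc=0x0104, isr_addr=0x2000, isr_len=16)
print(hex(resumed_at))
```

The essential invariant is that the value popped in step 6 is exactly the value pushed in step 2, so the interrupted program never notices the detour.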
Priority Interrupt: A priority interrupt is a system that establishes a priority over the various sources to determine which condition is to be serviced first when two or more requests arrive simultaneously.
The system may also determine which conditions are permitted to interrupt the computer while another interrupt is being serviced. Higher interrupt priority levels are assigned to requests which, if delayed or interrupted, could have serious consequences. When two devices interrupt the computer at the same time, the computer services the device with the higher priority first.
Establishing the priority of simultaneous interrupts can be done by software or hardware. In the software method, a polling procedure is used to identify the highest-priority source. There is one common branch address for all interrupts: the program that takes care of interrupts begins at the branch address and polls the interrupt sources in sequence. The order in which they are tested determines the priority of each interrupt.
The initial service routine for all interrupts consists of a program that tests the interrupt sources in sequence and branches to one of many possible service routines. The particular service routine reached belongs to the highest-priority device among all the devices that interrupted the computer. The drawback of the software method is that, if there are many interrupts, the time required to poll them can exceed the time available to service the I/O device. In this situation a hardware priority-interrupt unit can be used to speed up the operation.
A hardware priority-interrupt unit functions as an overall manager in an interrupt system environment. To speed up the operation, each interrupt source has its own interrupt vector, which is used to access its service routine directly. The hardware priority function can be established by either a serial or a parallel connection of interrupt lines. The serial connection is also known as the daisy-chaining method.
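Vectored dispatch can be sketched as a table lookup. The source numbers and handler names below are illustrative assumptions; in hardware the table would hold service-routine addresses rather than Python functions.

```python
# Sketch of vectored interrupts: each source indexes a vector table that
# holds its own service routine, so no polling is needed to dispatch.

log = []

def timer_isr():
    log.append("timer")

def uart_isr():
    log.append("uart")

def disk_isr():
    log.append("disk")

# Hypothetical vector table: interrupt source number -> service routine.
vector_table = {0: timer_isr, 1: uart_isr, 2: disk_isr}

def dispatch(source):
    vector_table[source]()   # jump directly to the routine for this source

dispatch(1)
dispatch(2)
print(log)
```

Because each source carries its own vector, dispatch time is constant regardless of how many sources exist, which is exactly the advantage over sequential polling.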
Daisy-chaining Priority: The daisy-chaining method of establishing priority consists of a serial connection of all devices that request an interrupt. The device with the highest priority is placed in the first position, followed by lower-priority devices up to the device with the lowest priority, which is placed last in the chain. The following figure demonstrates the method of connection between three devices and the CPU.
The interrupt request line is common to all devices and forms a wired-logic connection. If any device has its interrupt signal in the low-level state, the interrupt line goes to the low-level state and enables the interrupt input of the CPU. Only when no interrupts are pending does the interrupt line stay in the high-level state, with no interrupts recognized by the CPU. This is equivalent to a negative-logic OR operation. The CPU responds to an interrupt request by enabling the interrupt acknowledge line.
This signal is received by device 1 at its PI (priority in) input. The acknowledge signal passes on to the next device through the PO (priority out) output only if device 1 is not requesting an interrupt. If device 1 has a pending interrupt, it blocks the acknowledge signal from the next device by placing a 0 on its PO output. It then proceeds to place its own interrupt vector address (VAD) on the data bus for the CPU to use during the interrupt cycle.
A device with a 0 in its PI input generates a 0 in its PO output to inform the next-lower-priority device that the acknowledge signal has been blocked. A device that is requesting an interrupt and has a 1 in its PI input will intercept the acknowledge signal by placing a 0 on its PO output. If the device does not have a pending interrupt, it transmits the acknowledge signal to the next device by placing a 1 on its PO output.
Thus the device with PI = 1 and PO = 0 is the one with the highest priority that is requesting an interrupt, and this device places its vector address (VAD) on the data bus. The daisy-chain arrangement gives the highest priority to the device that receives the interrupt acknowledge signal from the CPU first. The farther the device is from the first position, the lower is its priority.
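The PI/PO logic described above can be simulated with a short loop over the chain. The vector addresses below are hypothetical; the propagation rule (PO = PI unless the device has a pending request, in which case PO = 0) follows the text.

```python
# Simulation of daisy-chain priority resolution.
# A device passes the acknowledge along (PO = PI) unless it is requesting,
# in which case it blocks it (PO = 0) and drives its vector address (VAD).

def daisy_chain(requests, vads):
    """requests: pending flags in chain order (index 0 = highest priority).
    vads: hypothetical vector addresses, one per device."""
    pi = 1   # CPU asserts interrupt acknowledge into the first device
    for req, vad in zip(requests, vads):
        if pi == 1 and req:
            return vad           # PI = 1 and PO = 0: this device is serviced
        pi = pi if not req else 0  # acknowledge ripples on only if not requesting
    return None                  # no device was requesting

# Devices 1 and 2 both request; device 1, being closer to the CPU, wins:
print(hex(daisy_chain([False, True, True], [0x40, 0x44, 0x48])))
```

Running this with both devices 1 and 2 requesting returns device 1's vector address, matching the rule that distance from the CPU determines priority.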
General Understanding
Interrupts are actually a design feature that enhances processor efficiency. When we give an input, the processor performs some computation and produces some output, but in the meanwhile you can interact with the system by interrupting the running process, or you can start and run another process; this responsiveness is due to interrupts, and in this way they enhance processor efficiency. Sometimes we have operations in which the processor would have to wait for the hardware to complete a task; in such a case the processor can perform some other task and let the hardware send an interrupt when it finishes, instead of waiting for the hardware to complete its task. In this way CPU time is saved.
For handling multiple interrupts we have two strategies: one is enable/disable, and the other is to set priorities for interrupts.
In the enable/disable approach, while one interrupt is being processed, if some other interrupt occurs at that time we simply reject it for the moment, keep on processing the current interrupt, and put the second interrupt in a pending state.
In the set-priorities method we set a priority for each interrupt module. Once priorities are set, interrupts are processed according to them; in this setup, higher-priority interrupts can preempt lower-priority ones. For example, if one interrupt is being processed and another interrupt of higher priority occurs, the previous interrupt will be paused, the new interrupt will be executed first, and the low-priority interrupt will be resumed after the higher-priority interrupt completes. When the processor stops the execution of an interrupt because of some other higher-priority interrupt, the low-priority interrupt's context is saved on the stack. As soon as an interrupt is received, the processor calls the specific ISR for that interrupt. The following figure explains the instruction cycle state diagram with interrupts.
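The two strategies above can be sketched side by side. The priority numbering is an assumption for illustration (a smaller number means higher priority), and the "context" saved here is reduced to just the priority value.

```python
# Sketch of the two multiple-interrupt strategies from the text.
# mode "disable": a new interrupt is held pending until the current one finishes.
# mode "priority": a higher-priority interrupt preempts, saving context on a stack.
# Smaller number = higher priority (an assumption for this illustration).

def handle_new_interrupt(current_priority, new_priority, context_stack, mode):
    if mode == "disable":
        return "pending"                     # rejected for now, kept pending
    if new_priority < current_priority:      # strictly higher priority arrived
        context_stack.append(current_priority)  # save low-priority ISR's context
        return "preempt"                     # new ISR runs first
    return "pending"                         # equal or lower priority waits

stack = []
print(handle_new_interrupt(current_priority=3, new_priority=1,
                           context_stack=stack, mode="priority"))
print(stack)   # saved context of the interrupted low-priority ISR
```

After the high-priority ISR returns, the saved entry would be popped and the low-priority ISR resumed, mirroring the PC save/restore sequence described earlier.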
Examples:
Microcontrollers use interrupts to prioritize between tasks and to ensure that certain peripheral modules are serviced quickly. Further, interrupts can be used to reduce the power consumption of a microcontroller, so that the device stays in a low-power mode until a certain interrupt-causing condition occurs.
Most external devices are much slower than the processor. Suppose that the processor is transferring data to a printer. After each write operation, the processor must pause and remain idle until the printer catches up. The length of this pause may be on the order of many hundreds or even thousands of instruction cycles that do not involve memory. Clearly, this is a wasteful use of the processor.
Reference:
Computer Architecture by A. P. Godse and D. A. Godse
http://www.ustudy.in/node/8468