# How exactly does a computer program work?

Post on 16-Jan-2016


1. The central part of a computer is called the CPU (Central Processing Unit). Modern CPUs include a lot of stuff, but simply explained, a CPU is a bunch of microelectronic circuits that can execute instructions.

2. You can think of the CPU as a bunch of logic gates - a lot of them. These gates are built from transistors and other electronic parts, and they implement very simple logical operators: AND, OR and NOT. Very simple CPUs used to have thousands of transistors; to grasp the complexity of modern CPUs, consider that some of them have over 2.5 billion. As you might have guessed, they're tiny. But here are some not-so-tiny ones:

(note the three connectors - two inputs, one output)

3. Transistors operate on electric currents. This is where your 1s and 0s come from: current present means 1, no current means 0. Logical operations alter these currents. For example, AND means the output carries current only if both its inputs do (1 AND 1 = 1). OR means one input is enough (1 OR 0 = 1). NOT simply reverses the current (NOT 0 = 1).

4. Believe it or not, these three logical operators, when combined, are enough to implement all logic, including arithmetic operations on integers (+, -, /, *), and consequently pretty much everything else. You just need a lot of them wired together. Think of it this way: numbers are represented as 1s and 0s in the binary numeral system, so addition is just a set of logical operations between the 1s and 0s the two numbers consist of. For example, a handful of gates is enough to add two bits together. Combine more of those and you can add large numbers.
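Point 4 can be sketched in code. Here each gate is a Python function, and chaining one-bit adders yields integer addition - a sketch of the idea, not of how real hardware is laid out:

```python
# Gates as functions: everything below is built only from AND, OR, NOT.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):  # XOR expressed via AND/OR/NOT
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit; return (sum bit, carry out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_numbers(x, y, width=8):
    """Chain full adders to add two integers, bit by bit."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_numbers(19, 23))  # -> 42
```

The same chaining is what a hardware "ripple-carry adder" does: each stage's carry feeds the next.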

5. OK, this is the CPU and its logic. But where do the instructions come from? Since we know how operations such as addition are implemented, we can now give them some sort of code. For example, we can agree that 45 means "add two numbers" and 87 means "divide them" (of course, we should also specify exactly what to add or divide). What the CPU does is read these numbers (the code) and execute the corresponding instruction.

6. In modern CPUs, what I just described is called Microcode, and microcode contains the most basic instructions. Microcode is then used to implement a more complicated set of instructions, called Machine code. Machine code is also numbers; the instructions are just somewhat more complicated. As an imaginary example, let's say the machine code instruction is 76 2 3 4. It could mean "add together two numbers (opcode 76) from memory positions 2 and 3, and write the result into position 4". Its implementation in microcode consists of much simpler, atomic commands such as "retrieve the number from position 2", which the CPU then executes.

7. Now, about the memory I just mentioned. Memory is easier to understand - it just contains a lot of these 0s and 1s, which can be changed. And it looks like nothing exciting, too:
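The fetch-and-execute loop from points 5 and 6 can be sketched as a toy interpreter. The opcode 76 and the instruction layout are the imaginary ones from the text, not any real CPU's:

```python
# A toy CPU: read an opcode from memory, execute it, move on.
def run(memory, pc=0):
    while True:
        opcode = memory[pc]
        if opcode == 76:                        # ADD src1, src2 -> dst
            src1, src2, dst = memory[pc + 1 : pc + 4]
            memory[dst] = memory[src1] + memory[src2]
            pc += 4
        elif opcode == 0:                       # HALT
            return memory

# Program and data share the same memory (the stored-program idea):
mem = [76, 6, 7, 8,   # add memory[6] + memory[7], store in memory[8]
       0,  0,         # HALT, padding
       40, 2, 0]      # data at positions 6, 7, 8
run(mem)
print(mem[8])         # -> 42
```

Real CPUs do essentially this in hardware, billions of times per second.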

In almost all existing computer architectures (thanks to John von Neumann), the instruction code we discussed above is stored in memory, just like any other data. The instruction I mentioned before (76 2 3 4) would also be stored in memory. To start the computer (CPU), you point it to some location in memory that holds instructions and say - Go! It will read the instructions one by one and execute them in the manner described above.

8. Of course, since computers understand only electric current or the lack of it (1s and 0s), all data, including those instruction numbers, is stored in the binary numeral system. The binary system is actually quite easy to understand. We use the decimal system because we have ten fingers: each digit can be one of 10 possible values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. There's nothing special about ten. You can imagine aliens with eight fingers using a numeral system with only eight digits: 0, 1, 2, 3, 4, 5, 6, 7.

I use Octal (8) numeral system!

You can think of binary as a numeral system which some very unfortunate ET with only two fingers would use. Here's an example of how to convert from binary to decimal:
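The conversion works by weighting each binary digit by a power of two, just as each decimal digit is weighted by a power of ten. A short sketch:

```python
# Convert a binary string to decimal: each step shifts the digits
# one place left (multiply by 2) and adds the next digit.
def binary_to_decimal(bits):
    value = 0
    for digit in bits:          # leftmost digit is the most significant
        value = value * 2 + int(digit)
    return value

# 101010 = 1*32 + 0*16 + 1*8 + 0*4 + 1*2 + 0*1
print(binary_to_decimal("101010"))  # -> 42
```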

9. Phew! Now we're getting somewhere. But we're not nearly done yet. You see, writing numbers, even in the decimal system, is very inconvenient and error-prone. So the first thing that appears is Assembly language. Assembly language is just a mnemonic representation of machine code. For example, our favorite instruction 76 2 3 4 could be represented as something like:

ADD [2], [3] -> [4] (not real assembly code)

10. The first Assembler (the program that translates assembly language into machine code) had, of course, to be written in pure code (numbers). But that is relatively easy, because assembly commands map almost 1:1 to machine code.

11. OK, now it's a little better, but assembly language is still way, way too verbose. Even simple programs, such as "read user input, add 1 to it, and print it back", take a lot of commands in assembly language. It's very low level and lacks higher abstractions.

12. Enter programming languages. They are much more high level, and you can express complicated things very succinctly compared to assembly. You could write something like this:

R = 2.0
print("Area of a circle with radius %f is %f" % (R, 3.14 * R * R))
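The nearly 1:1 translation described in point 10 can itself be sketched as a tiny assembler. The ADD mnemonic, the opcode 76, and the bracket syntax are all the imaginary ones from this article, invented purely for illustration:

```python
# A toy assembler: turn the mnemonic form from point 9 back into
# the numeric machine code from point 6.
OPCODES = {"ADD": 76}   # made-up opcode table

def assemble(line):
    # Expected form: "ADD [2], [3] -> [4]"
    mnemonic, rest = line.split(None, 1)
    operands = [int(tok.strip("[],->"))
                for tok in rest.split() if tok.strip("[],->")]
    return [OPCODES[mnemonic]] + operands

print(assemble("ADD [2], [3] -> [4]"))  # -> [76, 2, 3, 4]
```

Because the mapping is so direct, the hard part of a real assembler is bookkeeping (labels, addresses), not translation.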