# Base+Offset notation (or why we start counting with zero)

Every now and again, I get asked why we start counting things such as arrays and offsets with zero (0) and not one (1). The answer is simple: when specifying a data structure, we normally give the byte (or whatever unit) offset of the start of each field. Think about how a computer gets to a specific element of a list. Say we want the third element in the list (the offset is 2). The start of the list is the first element; it is zero bytes from the start, so its offset is 0. We move to the next element, which is at offset 1, and then we move to the next element (the third item), which is at offset 2, and read its contents. Here is what it looks like graphically:

```
0 1 2 3 4 5 6
A B C D E F G
^     (offset from A = 0)
```

Let’s say we want to get to element C (the third element, offset 2). If “A” is the start, then it is zero bytes from the start, so A’s offset is 0. The caret (^) beneath “A” shows the current position. Next, we move forward one element (offset = 1) to the second element, which looks like this:

```
0 1 2 3 4 5 6
A B C D E F G
  ^   (offset from A = 1)
```

Finally, we move forward one more element (offset = 2) to the third element, which looks like:

```
0 1 2 3 4 5 6
A B C D E F G
    ^ (offset from A = 2)
```

For example, the FAT boot sector starts out like this (note: this was taken from http://support.microsoft.com/kb/q140418/):

```
Field                 Offset     Length
-----                 ------     ------
Bytes Per Sector      11         2
Sectors Per Cluster   13         1
Reserved Sectors      14         2
FATs                  16         1
Root Entries          17         2
Small Sectors         19         2
Media Descriptor      21         1
Sectors Per FAT       22         2
Sectors Per Track     24         2