## Recovering a FAT filesystem directory entry in five phases

This is the last in a series of posts about five phases that digital forensics tools go through to recover data structures (digital evidence) from a stream of bytes. The first post covered fundamental concepts of data structures, as well as a high level overview of the phases. The second post examined each phase in more depth. This post applies the five phases to recovering a directory entry from a FAT file system.

The directory entry we’ll be recovering is from the Honeynet Scan of the Month #24. You can download the file by visiting the SOTM24 page. The entry we’ll recover is the 3rd directory entry in the root directory (the short name entry for _IMMYJ~1.DOC, istat number 5.)

Location

The first step is to locate the entry, which is at byte offset 0x2640 (9792 decimal). How do we know this? Assuming we know we want the third entry in the root directory, we can calculate the offset using values from the boot sector, along with the fact that each directory entry is 0x20 (32 decimal) bytes long (a detail that comes from the FAT file system specification). There is an implicit step we skipped: recovering the boot sector (so we could use its values). To keep this post to a (semi) reasonable length, we'll skip that step; it is fairly straightforward though. The calculation to locate the third entry in the root directory of the image file is:

3rd entry in root directory = (bytes per sector) * [(length of reserved area) + [(number of FATs) * (size of one FAT)]] + (offset of 3rd directory entry)

bytes per sector = 0x200 (512 decimal)

length of reserved area = 1 sector

number of FATs = 2

size of one FAT = 9 sectors

size of one directory entry = 0x20 (32 decimal) bytes

offset of 3rd directory entry = (size of one directory entry) * 2 (multiplied by 2 rather than 3, since offsets start at 0)

3rd entry in root directory = 0x200 * (1 + (2 * 9)) + (0x20 * 2) = 0x2640 (9792 decimal)
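The arithmetic above translates directly into code. Here is a minimal Python sketch; the constant names are my own, and the values are the boot sector values listed above:

```python
# Boot sector values from the SOTM24 image (normally these would be
# recovered from the boot sector itself, the step we skipped above).
BYTES_PER_SECTOR = 0x200   # 512
RESERVED_SECTORS = 1
NUM_FATS = 2
SECTORS_PER_FAT = 9
DIR_ENTRY_SIZE = 0x20      # 32 bytes per directory entry

def root_dir_entry_offset(index):
    """Byte offset of the index-th (0-based) entry in the root directory."""
    root_dir_start = BYTES_PER_SECTOR * (RESERVED_SECTORS + NUM_FATS * SECTORS_PER_FAT)
    return root_dir_start + index * DIR_ENTRY_SIZE

print(hex(root_dir_entry_offset(2)))  # third entry -> 0x2640
```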

Using xxd, we can see the hex dump for the 3rd directory entry:

```
$ xxd -g 1 -u -l 0x20 -s 0x2640 image
0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46  .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00  +-+-..Ou.,...P..
```

Extraction

Continuing to the extraction phase, we need to extract each field. For a short name directory entry, there are roughly 12 fields (depending on whether you consider the first character of the file name as its own field). The multibyte fields are stored in little endian, so we'll need to reverse the bytes we see in the output from xxd.

To start, the first field we'll consider is the name of the file. This starts at offset 0 (relative to the start of the data structure) and is 11 bytes long. It's the ASCII representation of the name.

` 0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
File name = _IMMYJ~1.DOC (_ represents the byte 0xE5)

The next field is the attributes field, which is at offset 11 and 1 byte long. It's an integer used as a bit field, so we'll examine it further in the decoding phase.

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Attributes = 0x20

Continuing in this manner, we can extract the rest of the fields:

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Reserved = 0x00

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Creation time (hundredths of a second) = 0x68

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Creation time = 0x4638

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Creation date = 0x2D2B

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Access date = 0x2D2B

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
High word of first cluster = 0x0000

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Modification time = 0x754F

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Modification date = 0x2C8F

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Low word of first cluster = 0x0002

`0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F`
` 0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..`
Size of file = 0x00005000 (bytes)
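In practice a tool would extract all of these fields in one step. Here is a minimal Python sketch using the struct module; the variable names are my own labels, and the raw bytes are the ones from the xxd output above:

```python
import struct

# The 32 raw bytes of the directory entry, copied from the hex dump.
raw = bytes.fromhex(
    "E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 "
    "2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00"
)

# "<" selects little-endian for the multibyte (H and I) fields.
(name, attrs, reserved, ctime_hundredths, ctime, cdate, adate,
 first_cluster_hi, mtime, mdate, first_cluster_lo, size) = struct.unpack(
    "<11sBBBHHHHHHHI", raw)

print(name)         # b'\xe5IMMYJ~1DOC'
print(hex(ctime))   # 0x4638
print(hex(size))    # 0x5000
```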

Decoding

With the fields extracted, we can decode the bit fields: specifically the attributes, date, and time fields. The attributes field is a single byte, with the following bits used to represent the various attributes:

• Bit 0: Read only
• Bit 1: Hidden
• Bit 2: System
• Bit 3: Volume label
• Bit 4: Directory
• Bit 5: Archive
• Bits 6 and 7: Unused
• Bits 0, 1, 2, 3: Long name

When decoding the fields in a FAT file system, the rightmost bit is considered bit 0. To specify a long name entry, bits 0, 1, 2, and 3 would all be set. The value we extracted from the example was 0x20, or 0010 0000 in binary. The bit at offset 5 (counting from the right) is set, which represents the “Archive” attribute.
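A sketch of how software might test those bits; the mask values follow the bit positions listed above:

```python
# Masks for the attribute bits (bit 0 = rightmost).
ATTR_FLAGS = [
    (0x01, "Read only"),
    (0x02, "Hidden"),
    (0x04, "System"),
    (0x08, "Volume label"),
    (0x10, "Directory"),
    (0x20, "Archive"),
]
LONG_NAME_MASK = 0x0F  # bits 0-3 all set means a long name entry

def decode_attributes(value):
    if value & LONG_NAME_MASK == LONG_NAME_MASK:
        return ["Long name"]
    return [name for mask, name in ATTR_FLAGS if value & mask]

print(decode_attributes(0x20))  # -> ['Archive']
```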

Date fields for a FAT directory entry are encoded in two byte values, and groups of bits are used to represent the various sub-fields. The layout for all date fields (modification, access, and creation) is:

• Bits 0-4: Day
• Bits 5-8: Month
• Bits 9-15: Year

Using this knowledge, we can decode the creation date. The value we extracted was 0x2D2B which is 0010 1101 0010 1011 in binary. The day, month, and year fields are thus decoded as:

`0010 1101 0010 1011`
Creation day: 01011 binary = 0xB = 11 decimal

`0010 1101 0010 1011`
Creation month: 1001 binary = 0x9 = 9 decimal

`0010 1101 0010 1011`
Creation year: 0010110 binary = 0x16 = 22 decimal

A similar process can be applied to the access and modification dates. The value we extracted for the access date was also 0x2D2B, and consequently the access day, month, and year values are identical to the respective fields for the creation date. The value we extracted for the modification date was 0x2C8F (0010 1100 1000 1111 in binary). The decoded day, month, and year fields are:

`0010 1100 1000 1111`
Modification day: 01111 binary = 0xF = 15 decimal

`0010 1100 1000 1111`
Modification month: 0100 binary = 0x4 = 4 decimal

`0010 1100 1000 1111`
Modification year: 0010110 binary = 0x16 = 22 decimal

You might have noticed the year values seem somewhat small (i.e. 22). This is because the value for the year field is an offset starting from the year 1980. This means that in order to properly interpret the year field, the value 1980 (0x7BC) needs to be added to the value of the year field. This is done during the next phase (interpretation).
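As a sketch, the date decoding above can be done with shifts and masks. This returns the raw field values (the year is still an offset from 1980, which is handled during interpretation):

```python
def decode_fat_date(value):
    """Split a 16-bit FAT date into raw (day, month, year-offset) fields."""
    day = value & 0x1F            # bits 0-4
    month = (value >> 5) & 0x0F   # bits 5-8
    year = (value >> 9) & 0x7F    # bits 9-15, offset from 1980
    return day, month, year

print(decode_fat_date(0x2D2B))  # creation/access date -> (11, 9, 22)
print(decode_fat_date(0x2C8F))  # modification date   -> (15, 4, 22)
```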

The time fields in a directory entry, similar to the date fields, are encoded in two byte values, with groups of bits used to represent the various sub-fields. The layout to decode a time field is:

• Bits 0-4: Seconds
• Bits 5-10: Minutes
• Bits 11-15: Hours

Recall that we extracted the value 0x4638 (0100 0110 0011 1000 in binary) for the creation time. Thus the decoded seconds, minutes, and hours fields are:

`0100 0110 0011 1000`
Creation seconds = 11000 binary = 0x18 = 24 decimal

`0100 0110 0011 1000`
Creation minutes = 110001 binary = 0x31 = 49 decimal

`0100 0110 0011 1000`
Creation hours = 01000 binary = 0x8 = 8 decimal

The last value we need to decode is the modification time. The bit-field layout is the same as for the creation time. The value we extracted for the modification time was 0x754F (0111 0101 0100 1111 in binary). The decoded seconds, minutes, and hours fields for the modification time are:

`0111 0101 0100 1111`
Modification seconds = 01111 binary = 0xF = 15 decimal

`0111 0101 0100 1111`
Modification minutes = 101010 binary = 0x2A = 42 decimal

`0111 0101 0100 1111`
Modification hours = 01110 binary = 0xE = 14 decimal
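A matching sketch for the time fields; note that the seconds field is still in its stored seconds-dividedted-by-two form at this point, which the interpretation phase will correct:

```python
def decode_fat_time(value):
    """Split a 16-bit FAT time into raw (hours, minutes, seconds/2) fields."""
    seconds = value & 0x1F         # bits 0-4, stored as seconds / 2
    minutes = (value >> 5) & 0x3F  # bits 5-10
    hours = (value >> 11) & 0x1F   # bits 11-15
    return hours, minutes, seconds

print(decode_fat_time(0x4638))  # creation time     -> (8, 49, 24)
print(decode_fat_time(0x754F))  # modification time -> (14, 42, 15)
```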

Interpretation

Now that we’ve finished extracting and decoding the various fields, we can move into the interpretation phase. The values for the years and seconds fields need to be interpreted. The value of the years field is the offset from 1980 (0x7BC) and the seconds field is the number of seconds divided by two. Consequently, we’ll need to add 0x7BC to each year field and multiply each second field by two. The newly calculated years and seconds fields are:

• Creation year = 22 + 1980 = 2002
• Access year = 22 + 1980 = 2002
• Modification year = 22 + 1980 = 2002
• Creation seconds = 24 * 2 = 48
• Modification seconds = 15 * 2 = 30

We also need to calculate the first cluster of the file, which simply requires concatenating the high and the low words. Since the high word is 0x0000, the value for the first cluster of the file is the value of the low word (0x0002).
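In code, this concatenation is a 16-bit shift and a bitwise OR:

```python
high_word, low_word = 0x0000, 0x0002

# First cluster = high word in the upper 16 bits, low word in the lower 16.
first_cluster = (high_word << 16) | low_word
print(first_cluster)  # -> 2
```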

In the next phase (reconstruction) we’ll use Python, so there are a few additional values that are useful to calculate. The first order of business is to account for the hundredths of a second associated with the seconds field for creation time. The value we extracted for the hundredths of a second for creation time was 0x68 (104 decimal). Since this value is greater than 100 we can add 1 to the seconds field of creation time. Our new creation seconds field is:

• Creation seconds = 48 + 1 = 49

This still leaves four hundredths of a second left over. Since we’ll be reconstructing this in Python, we’ll use the Python time class which accepts values for hours, minutes, seconds, and microseconds. To convert the remaining four hundredths of a second to microseconds multiply by 10000. The value for creation microseconds is:

• Creation microseconds = 4 * 10000 = 40000
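Putting the seconds, hundredths, and microseconds calculations together, a sketch of interpreting the creation time:

```python
from datetime import time

# Decoded creation time fields from the example: 8 hours, 49 minutes,
# 24 two-second units, and 0x68 (104) hundredths of a second.
hours, minutes, half_seconds, hundredths = 8, 49, 24, 0x68

seconds = half_seconds * 2 + hundredths // 100  # 48 + 1 = 49
microseconds = (hundredths % 100) * 10000       # 4 * 10000 = 40000

creation = time(hours, minutes, seconds, microseconds)
print(creation)  # -> 08:49:49.040000
```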

The other calculation is to convert the attributes field into a string. This is purely arbitrary, and is being done for display purposes. So our new attributes value is:

• Attributes = “Archive”

Reconstruction

This is the final phase of recovering our directory entry. To keep things simple, we’ll reconstruct the data structure as a Python dictionary. Most applications would likely use a Python object, and doing so is a fairly straightforward translation. Here is a snippet of Python code to create a dictionary with the extracted, decoded, and interpreted values (don’t type the >>> or …):
```
$ python
>>> from datetime import date, time
>>> dirEntry = dict()
>>> dirEntry["File Name"] = "\xE5IMMYJ~1DOC"
>>> dirEntry["Attributes"] = "Archive"
>>> dirEntry["Reserved Byte"] = 0x00
>>> dirEntry["Creation Time"] = time(8, 49, 49, 40000)
>>> dirEntry["Creation Date"] = date(2002, 9, 11)
>>> dirEntry["Access Date"] = date(2002, 9, 11)
>>> dirEntry["First Cluster"] = 2
>>> dirEntry["Modification Time"] = time(14, 42, 30)
>>> dirEntry["Modification Date"] = date(2002, 4, 15)
>>> dirEntry["size"] = 0x5000
>>>
```

If you wanted to print out the values in a (semi) formatted fashion you could use the following Python code:
```
>>> for key in dirEntry.keys():
...     print "%s == %s" % (key, str(dirEntry[key]))
...
```
And you would get the following output:

```
Modification Date == 2002-04-15
Creation Date == 2002-09-11
First Cluster == 2
File Name == ?IMMYJ~1DOC
Creation Time == 08:49:49.040000
Access Date == 2002-09-11
Reserved Byte == 0
Modification Time == 14:42:30
Attributes == Archive
size == 20480
>>>
```

At this point, there are a few additional fields that could have been calculated. For instance, the file name could have been broken into the respective 8.3 (base and extension) components. It might also be useful to calculate the allocation status of the associated file (in this case it would be unallocated). These are left as exercises for the reader ;).

This concludes the 3-post series on recovering data structures from a stream of bytes. Hopefully the example helped clarify the roles and activities of each of the five phases. Realize that the five phases aren’t specific to recovering file system data structures; they apply to network traffic, code, file formats, etc.

## The five phases of recovering digital evidence

This is the second post in a series about the five phases of recovering data structures from a stream of bytes (a form of digital evidence recovery). In the last post we discussed what data structures were, how they related to digital forensics, and a high level overview of the five phases of recovery. In this post we’ll examine each of the five phases in finer grained detail.

In the previous post, we defined five phases a tool (or human if they’re that unlucky) goes through to recover data structures. They are:

1. Location
2. Extraction
3. Decoding
4. Interpretation
5. Reconstruction

We’ll now examine each phase in more detail…

Location

The first step in recovering a data structure from a stream of bytes is to locate the data structure (or at least the fields of the data structure we’re interested in). Currently, there are three commonly used methods for location:

1. Fixed offset
2. Calculation
3. Iteration

The first method is useful when the data structure is at a fixed location relative to a defined starting point. This is the case for a FAT boot sector, which is located in the first 512 bytes of a partition. The second method (calculation) uses values from one or more other fields (possibly in other data structures) to calculate the location of a data structure (or field). The last method (iteration) examines “chunks” of data, and attempts to identify if the chunks are “valid”, meaning the (eventual) interpretation of the chunk fits predetermined rules for a given data structure.

These three methods aren’t mutually exclusive, meaning they can be combined and intermixed. It might be the case that locating a data structure requires all three methods. For example, when the ils tool from Sleuthkit is run against a FAT file system, it first recovers the boot sector, then calculates the start of the data region, and finally iterates over chunks of data in the data region, attempting to validate each chunk as a directory entry.

While all three methods require some form of a priori knowledge, the third method (iteration) isn’t necessarily dependent on knowing the fixed offset of a data structure. From a purist perspective, iteration itself really is location. Iteration yields a set of possible locations, as opposed to the first two methods which yield a single location. The validation aspect of iteration is really a combination of the rest of the phases (extraction, decoding, interpretation and reconstruction) combined with post recovery analysis.

Another method for location, that is less common than the previous three is location by outside knowledge from some source. This could be a person who has already performed location, or it could be the source that created the data structure (e.g. the operating system). Due to the flexible and dynamic nature of computing devices, this isn’t commonly used, but it is a possible method.

Extraction

Once a data structure (or the relevant fields) has been located, the next step is to extract the fields of the data structure out of the stream of bytes. Realize that the “extracting” is really the application of type information. The information from the stream is the same, but we’re using more information about how to access (and possibly manipulate) the value of the field(s). For example, the string of bytes 0x64 and 0x53 can be extracted as an ASCII string composed of the characters “dS”, or as the (big endian) value 0x6453 (25683 decimal). The information remains the same, but how we access and manipulate the values (e.g. concatenation vs. addition) differs. Knowledge of the type of a field provides the context for how to access and manipulate its value, which is used during the later phases of decoding, interpretation, and reconstruction.

The extraction of a field that is composed of multiple bytes also requires knowledge of the order of the bytes, commonly referred to as “endianness”, “byte ordering”, or “big vs. little endian”. Take for instance the 16-bit hexadecimal number 0x6453. Since this is a 16-bit number, we need two bytes (assuming 8-bit bytes) to store it. So the value 0x6453 is composed of the (two) bytes 0x64 and 0x53.

It’s logical to think that these two bytes would be adjacent to each other in the stream of bytes, and they typically are. The question now is: what is the order of the two bytes in the stream?

0x64, 0x53 (big endian)

or

0x53, 0x64 (little endian)

Obviously the order matters.
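A quick Python illustration using the struct module, which makes the byte order explicit in the format string:

```python
import struct

raw = b"\x64\x53"  # the two adjacent bytes from the stream

# The same two bytes, extracted under each byte ordering:
print(struct.unpack(">H", raw)[0])  # big endian    -> 25683 (0x6453)
print(struct.unpack("<H", raw)[0])  # little endian -> 21348 (0x5364)
```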

Decoding

After the relevant fields of a data structure have been located and extracted, it’s still possible that further extraction is necessary, specifically for bit fields (e.g. flags, attributes, etc.). The difference between this phase and the extraction phase is the input: extraction pulls information out of a stream of bytes, while decoding pulls information out of the output of the extraction phase. Both phases, however, focus on extracting information. Computation using extracted information is reserved for later phases (interpretation and reconstruction).

Another reason to distinguish this phase from extraction is that most (if not all) computing devices read memory in whole bytes, not individual bits. While a human with a hex dump could potentially extract a single bit, software needs to read (at least) a whole byte and then extract the various bits within it.

There isn’t much that happens at this phase, as much of the activity focuses around accessing various bits.

Interpretation

The interpretation phase takes the output of the decoding phase (or the extraction phase if the decoding phase uses only identity functions) and performs various computations using the information. While extraction and decoding focus on extracting and decoding values, interpretation focuses on computation using the extracted (and decoded) values.

Two examples of interpretation are unit conversion, and the calculation of values used during the reconstruction phase. An example of unit conversion would be converting the seconds field of a FAT time stamp from its native format (seconds divided by two) to a more logical format (seconds). A useful computation for reconstruction might be to calculate the size of the FAT region (in bytes) for a FAT file system (bytes per sector * size of one FAT structure in sectors * number of FAT structures).
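Both examples as a short sketch; the sample values are illustrative assumptions (a small volume with 512-byte sectors and two 9-sector FATs):

```python
def fat_seconds_to_seconds(raw_seconds):
    # FAT stores the seconds field as seconds divided by two.
    return raw_seconds * 2

def fat_region_size(bytes_per_sector, sectors_per_fat, number_of_fats):
    # Size of the FAT region in bytes.
    return bytes_per_sector * sectors_per_fat * number_of_fats

print(fat_seconds_to_seconds(24))   # -> 48
print(fat_region_size(512, 9, 2))   # -> 9216
```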

Since this phase is used heavily by the reconstruction phase, it’s not uncommon to see this phase embodied in the code for reconstruction. However this phase is still a logically separate component.

Reconstruction

This is the last phase of recovering digital evidence. Information from previous phases is used to reconstruct a usable representation of the data structure (or at least the relevant fields.) Possible usable representations include:

• A language specific construct or class (e.g. Python date object or a C integer)
• Printable text (e.g. output from fsstat)
• A file (e.g. file carving)

The idea is that the output from this phase can be used for some further analysis (e.g. timeline generation, analyzing email headers, etc.) Some tools might also perform some error checking during reconstruction, failing if the entire data structure is unable to be properly recovered. While this might be useful in some automated scenarios, it has the downside of potentially missing useful information when only part of the structure is available or is valid.

At this point, we’ve gone into more detail of each phase and hopefully explained in enough depth the purpose and types of activity that happen in each. The next (and last) post in this series is an example of applying the five phases to recovering a short name directory entry from a FAT file system.

## How forensic tools recover digital evidence (data structures)

In a previous post I covered “The basics of how digital forensics tools work.” In that post, I mentioned that one of the steps an analysis tool has to do is to translate a stream of bytes into usable structures. This is the first in a series of three posts that examines this step (translating from a stream of bytes to usable structures) in more detail. In this post I’ll introduce the different phases that a tool (or human if they’re that unlucky) goes through when recovering digital evidence. The second post will go into more detail about each phase. Finally, the third post will show an example of translating a series of bytes into a usable data structure for a FAT file system directory entry.

Data Structures, Data Organization, and Digital Evidence

Data structures are central to computer science, and consequently bear importance to digital forensics. In The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd Edition), Donald Knuth provides the following definition for a data structure:

Data Structure: A table of data including structural relationships

In this sense, a “table of data” refers to how a data structure is composed. This definition does not imply that arrays are the only data structure (which would exclude other structures such as linked lists.) The units of information that compose a data structure are often referred to as fields. That is to say, a data structure is composed of one or more fields, where each field contains information, and the fields are adjacent (next) to each other in memory (RAM, hard disk, usb drive, etc.)

The information the fields contain falls into one of two categories: the data a user wishes to represent (e.g. the contents of a file), and the structural relationships (e.g. a pointer to the next item in a linked list.) It’s useful to think of the former as data, and the latter as metadata, although the line between the two is not always clear and depends on the context of interpretation. What may be considered data from one perspective may be considered metadata from another. An example of this would be a Microsoft Word document, which from a file system perspective is data. From the perspective of Microsoft Word, however, the file contains both data (the text) as well as metadata (the formatting, revision history, etc.)

The design of a data structure not only includes the order of the fields, but also the higher level design goals for the programs which access and manipulate the data structures. For instance, efficiency has long been a desirable aspect of many computer programs. With society’s increased dependence on computers, other higher level design goals such as security, multiple access, etc. have also become desirable. As a result, many data structures contain fields to accommodate these goals.

Another important aspect in computing is how to access and manipulate the data structures and their related fields. Knuth defines this under the term “data organization”:

Data Organization: A way to represent information in a data structure, together with algorithms that access and/or modify the structure.

An example of this would be a field that contains the bytes 0x68, 0x65, 0x6C, 0x6C, and 0x6F. One way to interpret these bytes is as the ASCII string “hello”. In another interpretation, these bytes can be the integer number 448378203247 (decimal). Which one is it? Well there are scenarios where either could be correct. To answer the question of correct interpretation requires information beyond just the data structure and field layout, hence the term data organization. Even with self-describing data structures, information about how to access and manipulate the “self-describing” parts (e.g. type “1” means this is a string) is still needed.
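A Python sketch makes the difference concrete: the same five bytes, accessed under two different data organizations:

```python
raw = b"\x68\x65\x6C\x6C\x6F"

# Interpretation 1: the bytes as an ASCII string.
as_string = raw.decode("ascii")

# Interpretation 2: the bytes as a (big endian) integer.
as_integer = int.from_bytes(raw, byteorder="big")

print(as_string)   # -> hello
print(as_integer)  # -> 448378203247
```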

So where does all this information for data organization (and data structures) come from? There are a few common sources. Perhaps the first would be a document from the organization that designed the data structures and the software that accesses and manipulates them. This could be either a formal specification, or one or more informal documents (e.g. entries in a knowledge base.) Another source would be reverse engineering the code that accesses and manipulates the data structures.

If you’ve read through all of this, you might be asking “So how does this relate to digital forensics?” The idea is that data structures are a type of digital evidence. Realize that the term “digital evidence” is somewhat overloaded. In one context, a disk image is digital evidence (i.e. what was collected during evidence acquisition), and in another context, an email extracted from a disk image is digital evidence. This series focuses on the latter: digital evidence extracted from a stream of bytes. Typically this would occur during the analysis phase, although (especially with activities such as verification) this can occur prior to the evidence acquisition phase.

The 5 Phases

Now that we’ve talked about what data structures are and how they relate to digital forensics, let’s see how to put this to use with our forensic tools. What we’re about to do is describe five abstract phases, meaning not all tools implement them directly, and some tools don’t focus on all five. These phases can also serve as a methodology for recovering data structures, should you happen to be in the business of writing digital forensic tools.

1. Location
2. Extraction
3. Decoding
4. Interpretation
5. Reconstruction

The results of each phase are used as input for the next phase, in a linear fashion.

An example will help clarify each phase. Consider the recovery of a FAT directory entry from a disk image. The first task would be to locate the desired directory entry, which could be accomplished through different mechanisms such as calculation or iteration. The next task is to extract out the various fields of the data structure, such as the name, the date and time stamps, the attributes, etc. After the fields have been extracted, fields where individual bits represent sub fields can be decoded. In the example of the directory entry, this would be the attributes field, which denotes if a file is considered hidden, to be archived, a directory, etc. Once all of the fields have been extracted and decoded, they can be interpreted. For instance, the seconds field of a FAT time stamp is really the seconds divided by two, so the value must be multiplied by two. Finally, the data structure can be reconstructed using the facilities of the language of your choice, such as the time class in Python.

There are a few interesting points to note about recovering data structures using the above methodology. First, not all tools go through all phases, at least not directly. For instance, file carving doesn’t directly care about data structures. Depending on how you look at it, file carving really does go through all five phases; it just uses an identity function for some of them. In addition, file carving does care about (parts of) data structures: it cares about the fields that contain “user information”, not about the rest of the fields. In fact, much file carving is done with a built-in assumption about the data structure: that the fields containing “user information” are stored in contiguous locations.

Another interesting point is the distinction between extraction, decoding, and interpretation. Briefly, extraction and decoding focus on extracting information (from stream of bytes and already extracted bytes respectively), whereas interpretation focuses on computation using extracted and decoded information. The next post will go into these distinctions in more depth.

A third and subtler point comes from the transition of data structures between different types of memory, notably from RAM to a secondary storage device such as a hard disk or USB thumb drive. Not all structural information may make the transition from RAM, and as a result some is lost. For instance, a linked list data structure, which typically contains a pointer field to the next element in the list, may not record the pointer field when being written to disk. More often than not, such information isn’t necessary to read consistent data structures from disk, otherwise the data organization mechanism wouldn’t really be consistent and reliable. However, if an analysis scenario does require such information (it’s theoretically possible), the data structures would have to come directly from RAM, as opposed to after they’ve been written to disk. This problem doesn’t stem from the five phases, but instead stems from a loss of information during the transition from RAM to disk.

In the next post, we’ll cover each phase in more depth, and examine some of the different activities that can occur at each phase.

## Evaluating Forensic Tools: Beyond the GUI vs Text Flame War

One of the good old flamewars that comes up every now and again is which category of tools is “better”: graphical, console (e.g. interactive text-based), or command-line?

Each interface mechanism has its pros and cons, and when evaluating a tool, the interface mechanism used can make an impact on the usability of the tool. For instance, displaying certain types of information (e.g. all of the picture files in a specific directory) naturally lends itself to a graphical environment. On the other hand, it’s important to me to be able to use the keyboard to control the tool (using a mouse can often slow you down). The idea that graphical tools “waste CPU cycles” is pretty moot, considering the speed of current processors, and that much forensic work focuses on data sifting and analysis, which is heavily tied to I/O throughput.

Text based tools however do have the issue that paging through large chunks of data can be somewhat tedious (this is where I like using the scrollwheel, the ever-cool two-finger scroll on a Macbook, or even the “less” command.)

To me, there are more important issues than the type (e.g. graphical or text-based) of interface. Specifically some of the things I focus on are:

• What can the tool do?
• What are the limitations of the tool?
• How easy is it to automate the tool (getting data into the tool, controlling execution of the tool, and getting data out of the tool)?

The first two items really focus on the analysis capabilities of the tool (which can be “make-or-break” decisions by themselves), and the last item (really three items rolled into one) focuses on the automation capabilities of the tool.

The automation capabilities are often important because no single tool does everything, and an analyst’s toolkit is composed of a series of tools that have differing capabilities. Being able to automate the different tools in your toolkit (and being able to transfer data between multiple tools) is often a huge time saver.

Many tools have built-in scripting capabilities. For instance, ProDiscover has ProScript, EnCase has EnScript, etc. Command line tools can typically be “wrapped” by another language. Autopsy, for example, is a Perl script that wraps around the various Sleuthkit tools. While it is useful to be able to automate the execution of a tool, it’s also useful to be able to automate the import and export of data. Being able to programmatically extract the results of a tool and feed them as input to another (or process them further) allows you to combine multiple tools in your toolkit.

So, to me, when evaluating a forensic tool, the capabilities (and limitations) and the ease of automation are (often) more important than the interface.

## Copying 1s and 0s

I’ve been asked a few times over the past weeks about making multiple copies of disk images. Specifically, if I were to make a copy of a copy of a disk image, would the “quality” degrade? The short answer is no. It boils down to the idea of copying information from a digital format (as opposed to an analog format). Let’s say I write down the following string of ones and zeros on a 3×5 card:

101101

Now, if I take out another 3×5 card and copy over (by hand) the same string of ones and zeros, I now have two cards with the string:

101101

Finally, if I took out a third 3×5 card and copied yet again (by hand) the same string (from the second card) I would have three cards with the string:

101101

Assuming I had good handwriting, then each copy would be legible and have the same string of ones and zeros. I could continue on indefinitely in this manner, and each time the new 3×5 card would not suffer in “quality”.

However, instead of copying (by hand) the string of ones and zeros, I could have photocopied the first 3×5 card. This would yield a result of one 3×5 card, and a (slightly) degraded copy. If I then photocopied the copy, I would get a further degraded (third) copy.

So, copying images (or any digital information) verbatim (i.e. using a lossless transformation process) doesn’t degrade the quality of the information: read a “one” from the source, write a “one” to the destination. Where you might run into trouble is if the source or destination media has (physical) errors. So it’s always a good idea to verify your media before imaging. It’s also a good idea to use a tool that tells you about (and even better, logs) errors that it encounters during the imaging process.
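The “copy of a copy” argument can be demonstrated directly: make several successive verbatim copies of a file and hash every generation. This is a minimal sketch (file names and contents are made up for illustration); each generation hashes identically, unlike the photocopy analogy:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Hash a file's contents in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

workdir = tempfile.mkdtemp()

# Generation 0: the "original" -- the bytes of the string "101101".
original = os.path.join(workdir, "gen0.img")
with open(original, "wb") as f:
    f.write(b"101101")

# Make three successive copies, each from the previous generation.
hashes = [sha256_of(original)]
prev = original
for gen in range(1, 4):
    copy = os.path.join(workdir, f"gen{gen}.img")
    shutil.copyfile(prev, copy)  # verbatim, lossless copy
    hashes.append(sha256_of(copy))
    prev = copy

print(len(set(hashes)))  # prints 1 -- every generation is bit-identical
```

This is also why hashing an acquired image and comparing it against the source hash is standard practice: the hash match is the proof that the lossless copy actually was lossless.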

## Exhibits from deposition of RIAA’s expert available online

Updating the previous post, the exhibits from the deposition are available at:

## Transcript of deposition of RIAA’s expert available online

In UMG v. Lindor, the RIAA’s expert was deposed on February 23rd 2007. A PDF copy of the transcript is available at ilrweb.com.

Source: Recording Industry vs The People blog.

## Planting evidence

The other day, Dimitris left a comment asking about how to determine if someone has altered the BIOS clock and placed a new file on the file system. In essence, this is “planting evidence”.

So, what might the side effects of this type of activity be? It’s difficult (if not impossible) to give an exact answer that will fit every time, especially since there are different ways of planting evidence in this scenario. For example, the suspect could:

1. Power down the system
2. Change the BIOS clock
3. Power the system back up
4. Place the file on the system
5. Power the system back down
6. Fix the BIOS clock

Alternatively, the suspect could also:

1. Yank the power cord
2. Update the BIOS clock
3. Boot up with a bootable CD
4. Place the file on the system
5. Yank the power cord (again)
6. Fix the BIOS clock

It’s also possible to change the clock while the power is running. Again, it’s a different scenario with different possible side effects. Really, this needs to be evaluated on a case-by-case basis. However, the fundamental idea is to look for evidence of events that are somehow (directly or indirectly) tied to time, and see if you can find inconsistencies (a “glitch in the matrix” if you will.) The evidence can be both physical as well as digital. For example, if the “victim” were having lunch with the CEO and CIO during the time the document was supposedly placed on the system, then the “victim” likely has a good alibi.

For digital evidence, there are three basic aspects of computing devices:

• the code they run (e.g. programs such as antivirus software)
• the management code (e.g. operating-system-related aspects)
• the communication between computing devices (e.g. the network)

Dimitris mentioned the event log, so we’ll start with operating-system-related aspects (which fall under the category of host-based forensics). There are a number of operating-system-specific events that are tied to time. I suspect the reason the event log is so often referred to is that it is difficult to modify. Each time the event log service is started and stopped (when a Windows machine is booted up and shut down), an entry is made in the event log. Another time-related event on a Windows system could be the System Restore functionality. If a restore point was created while a suspect was planting evidence, you may find an additional restore point with time-related events from the future.

Another aspect to examine would be the programs that are run on the system. This falls under the category of code forensics. For instance, if antivirus software were configured to run at boot up, examining the antivirus log might show time-related anomalies. Under the category of code forensics, you might also want to examine the file purported to be “planted”. For example, Microsoft Word documents typically contain metadata such as the last time the file was opened, by whom, where the file has been, etc. If the suspect were to boot the system with a bootable CD and place a document that way, examining the suspect file(s) may be one of the more fruitful options.

Don’t forget about the communication between computing devices, which falls under the category of network forensics. For example, if the system was booted up, did it grab an IP address via DHCP? If so, the DHCP server may have logs of this activity. Other common footprints of a system on a network include checking for system updates (e.g. with an update server), NetBIOS announcements, synchronizing time with a time server, etc. This alone could be a good reason to pull logs from network devices during an incident, even if you are sure the network devices weren’t directly involved with the incident, since the log files could still contain useful information.

Now, I’m sure there are some folks who will say “well, this can all be faked or modified”. Yes, that’s true: if it’s digital, at some point it can be overwritten. First, realize that I didn’t intend for this to be an all-inclusive checklist of things to look for. Second, especially if the system was booted using the host operating system, there are a myriad of actions that take place, and accounting for every single one of them can be a daunting task. I’m not saying it’s impossible, just that it is likely to be very difficult. In essence, it would come down to how well the suspect covered their tracks, and how thoroughly you are able to examine the system.

At this point, I am interested in hearing from others, where might they look? For instance, the registry could hold useful information. Thoughts?

## Caught in the act…

I had lunch at a lounge today (I’m friends with the owners, and they have free WiFi) and when I went to pay my bill, I had an interesting surprise. The waitress looked at her computer screen and said, “Something’s happening.” Well, she’s not the most computer-literate person, so I took a look at the screen and, sure enough, someone was editing a text-based configuration file (using DOS edit) that contained, amongst other things, prices for the various meal items at the restaurant. A quick call to the owner confirmed that this WAS the company that has a service contract with the restaurant to maintain their restaurant software. Still, for a moment I wondered if this was one of those rare few times you get to watch an attacker in action. Well, not this time; perhaps next time.

## How digital forensics relates to computing

A lot of people are aware that there is some inherent connection between digital forensics and computing. I’m going to attempt to explain my understanding of how the two relate. However, before we dive into digital forensics, we should clear up some misconceptions about what computing is (and perhaps what it is not).

Ask yourself the question “What is computing?” When I ask this question (or some related variant) of folks, I get different responses, ranging from programming to abstract definitions about Turing machines. Well, it turns out that in 1989 some folks over at the ACM came up with a definition that works out quite well. The final report, titled “Computing as a Discipline”, provides a short definition:

The discipline of computing is the systematic study of algorithmic processes that describe and transform information … The fundamental question underlying all of computing is, “What can be (efficiently) automated?”

This definition can be a bit abstract for those who aren’t already familiar with computing. In essence, the term “algorithmic processes” means algorithms. That’s right, algorithms are at the heart of computing. A (well-defined) algorithm is essentially a finite set of clearly articulated steps to accomplish some task. The task the algorithm is trying to accomplish can vary. So computing is about algorithms whose tasks are to describe and transform information.

When we implement an algorithm on a computing device, we have a computer program. So a computer program is really just the implementation of some algorithm for a specific computing device. The computing device could be a physical machine (e.g. an Intel architecture) or an abstract model (e.g. a Turing machine). When we implement an algorithm for a specific computing device, we’re really just translating the algorithm into a form the computing device can understand. To help make this more concrete, take for example Microsoft Word. Microsoft Word is a computer program: a slew of computer instructions encoded in a specific format. The computer instructions tell a computing device (e.g. the processor) what to do with information (the contents of the Word document). The computer instructions are a finite set of clearly articulated steps (described in a format the processor understands) to accomplish some task (editing the Word document).

There is one other concept to deal with before focusing on digital forensics, and that is how algorithms work with information. In order for an algorithm to transform and describe information, the information has to be encoded in some manner. For example, the letter “A” can be encoded (in ASCII) as the number 0x41 (65 decimal). The number 0x41 can then be represented in binary as 01000001. This binary number can then be encoded as the different positions of magnets and stored on a hard disk. Implicit in the structure of the algorithm is how the algorithm decodes the representation of information. This means that given just the raw encoding of information (e.g. a stream of bits), we don’t know what information is represented; we still need to understand (to some degree) how the information is used by the algorithm. I blogged about this a bit in a previous post, “Information Context (a.k.a. Code/Data Duality)“.
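The encoding chain above (“A” to 0x41 to 01000001) can be walked through in a few lines. This is just a sketch of the idea that the bits alone don’t tell you what they represent; you need to know how the algorithm interprets them:

```python
# The letter "A" -> ASCII code point 0x41 -> the bit pattern 01000001.
letter = "A"
code_point = ord(letter)          # 65, i.e. 0x41 in ASCII
bits = format(code_point, "08b")  # "01000001"

print(hex(code_point))  # 0x41
print(bits)             # 01000001

# The identical eight bits, decoded by two different "algorithms":
print(int(bits, 2))       # 65 -- read as an unsigned integer
print(chr(int(bits, 2)))  # A  -- read as an ASCII character
```

The last two lines are the point: the same stream of bits is the number 65 to one interpretation and the letter “A” to another, so the meaning lives in the decoding, not in the bits themselves.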

So how does this relate to digital forensics? Simple: digital forensics is the application of knowledge of various aspects of computing to answer legal questions. It’s been common (in practice) to extend the definition of digital forensics to answer certain types of non-legal questions as well (e.g. policy violations in a corporate setting).

Think for a moment about what we do in digital forensics:

• Collection of digital evidence: We collect the representation of information from computing devices.
• Preservation of digital evidence: We take steps to minimize the alteration of the information we collect from computing devices.
• Analysis of digital evidence: We apply our understanding of computer programs (algorithmic processes) to interpret and understand the information we collected. We then use our interpretation and understanding of the information to arrive at conclusions, using deductive and inductive logic.
• Reporting: We relate our findings of the analysis of information collected from computing devices, to others.

Metadata can also be explained in terms of computing. Looking back at the definition for the discipline of computing, realize there are two general categories of information:

• information that gets described and transformed by the algorithm
• auxiliary information used by the algorithm when the steps of the algorithm are carried out

The former (information that gets described and transformed) can be called “content”, while the latter (auxiliary information used by the algorithm when executed) can be called “metadata”. Viewed from this perspective, metadata is particular to a specific algorithm (computer program), and what is content to one algorithm could be metadata to another.

Again, an example can help make this a bit clearer. Let’s go back to our Microsoft Word document. From the perspective of Microsoft Word, the content would be the text the user typed. The metadata would be the font information and attributes, revision history, etc. So, to Microsoft Word, the document contains both content and metadata. However, from the perspective of the file system, the Word document is the content, and things such as the location of the file, security attributes, etc. are all metadata. So what Microsoft Word considers metadata and content is just content to the file system.
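The perspective shift between the application and the file system can be sketched in code. The four-byte header format below is entirely hypothetical, standing in for application-level metadata like Word’s font information:

```python
import os
import tempfile

# Hypothetical application-level format: a 4-byte tag (the application's
# "metadata"), followed by the text the user typed (the "content").
record = b"FONT" + b"Hello, Word!"

path = os.path.join(tempfile.mkdtemp(), "doc.bin")
with open(path, "wb") as f:
    f.write(record)

# The application's view: it splits its own metadata from its content.
app_metadata, app_content = record[:4], record[4:]

# The file system's view: the entire record -- header included -- is
# just content...
with open(path, "rb") as f:
    fs_content = f.read()

# ...while the file system's own metadata (size, timestamps, location)
# lives outside the file's bytes.
fs_metadata = os.stat(path)

print(app_metadata, app_content)
print(fs_metadata.st_size)  # 16 -- header and payload counted alike
```

Note that `fs_content` contains `app_metadata`: the application’s metadata is simply part of the file system’s content, which is exactly the point made above.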

Hopefully this helps explain what computing is, and how digital forensics relates.