The Single Piece of Evidence (SPoE) Myth

Crime-drama television shows often feature a “single piece of evidence” that explains the entire crime and is used to secure a guilty conviction. In real life this situation rarely arises. Instead, a typical investigation uncovers many pieces of evidence that are used during trial. Some of the evidence found during an investigation will be more persuasive to a jury, some less. However, it’s uncommon (and perhaps foolish) for a prosecutor to proceed to court with a single piece of evidence. What is somewhat more common is for a prosecutor to proceed to court with multiple pieces of evidence, perhaps one or two of which are likely to be very persuasive.

One topic where the SPoE myth often appears is anti-forensics. Simply put, anti-forensics is anything that a suspect does to hinder a forensic examination. Many of the sources of information that are used during an investigation (e.g. file system time stamps) can be easily modified. When a new anti-forensic technique is discovered, there is sometimes a tendency to see the technique as a “silver bullet” which can halt an entire investigation.

The truth is, a single action (e.g. logging in, compiling a program, reading email, etc.) can impact many different aspects of the operating system, especially on a Windows system. Compromising the integrity of a “single piece of evidence” (e.g. the last accessed file system time stamp) is rarely fatal. This is because there are typically a number of places to look to find evidence to support (or deny) some theory. Removing one piece of evidence may make an argument weaker (or stronger), but rarely does it invalidate the entire argument.

The admissibility vs. weight of digital evidence

There is always a lot of conversation about when digital evidence is and is not admissible. Questions like “are proxy logs admissible?” and “what tools generate admissible evidence?” are focused on the concept of evidence admissibility. Some of the responses to these questions are correct, and some are not. I think the underlying issue (at least from what I’ve observed) with the incorrect answers stems from confusing two similar yet distinct legal concepts: evidence admissibility and the weight of evidence.

Caveats and Disclaimers

Before we begin this discussion, I want you to be aware of the following items:

  • I am not a lawyer
  • This is not legal advice
  • Always consult with your legal counsel for legal advice
  • The legal concepts discussed in this blog post are specific to the United States. Other jurisdictions are likely to have similar concepts.
  • Every court case (civil, criminal and otherwise) is decided on a case-by-case basis. This means what is true for one case may not be true for another.

Essentially, evidence admissibility refers to the requirements for evidence to be entered into a court case. The weight of evidence, however, refers to how likely the evidence is to persuade a person (e.g. judge or jury) towards (or against) a given theory.

In the legal system, before evidence can be presented for persuasive use, it must be admitted by the court. If one side or the other raises an objection to the evidence being admitted, a judge will typically listen to arguments from both sides, and come to a decision about whether or not to admit the evidence. The judge will likely consider things like admissibility requirements (listed below), prejudicial effects, etc.

When it comes to court (and I’m going to focus on criminal court) the rules for what is and what is not admissible vary. There are however three common elements:

  1. Authenticity
  2. Relevancy
  3. Reliability

Briefly, authenticity refers to whether or not the evidence is authentic, or “is what it is purported to be.” For example, is the hard drive being entered into evidence as the “suspect drive” actually the drive that was seized from the suspect system? Relevancy refers to whether or not the evidence relates to some issue at hand. Finally, reliability refers to whether or not the evidence meets some “minimum standard of trustworthiness”. Reliability is where concepts such as Daubert/Frye, repeatable and consistent methodology, etc. are used. The oft quoted “beyond a reasonable doubt” is used as a bar for determining guilt or innocence, not evidence admissibility.

These requirements apply equally well to all types of evidence, including digital evidence. In fact, there are no extra “hoops” that digital evidence has to jump through for admissibility purposes. You’ll also notice that things like chain of custody, MD5 hashes, etc. aren’t on the list, for a simple reason: they aren’t strict legal requirements for evidence admissibility. Devices such as a chain of custody or MD5 hashes are common examples of how to help meet various admissibility requirements, or how to help strengthen the weight of the evidence, but in and of themselves they are not strictly required by legal statute.

There are “myths” surrounding evidence admissibility that are common to digital forensics. I’ll focus on the two most common (that I’ve seen):

  1. Digital evidence is easy to modify and can’t be used in court
  2. Only certain types of tools generate admissible evidence

The first myth centers on the idea that digital evidence is often easy to modify (either accidentally or intentionally), and really concerns the reliability requirement of evidence admissibility. The short answer is that digital evidence is admissible. In fact, unless there is specific support for a claim of alteration (e.g. discrepancies in a log file), the opposing side cannot even raise this possibility (at least for admissibility purposes). Even if there are discrepancies, the evidence is likely to still be admitted, with the discrepancies going towards the weight of the evidence rather than admissibility. The exception might be discrepancies/alterations so egregious as to undermine a “minimum standard of trustworthiness.”

The second myth is commonly found in the form of the question “What tools are accepted by the courts?” I think a fair number of people really mean “What tools generate results that are admissible in court?” Realize that in this case, the “results” would be considered evidence. This scenario is somewhat analogous to a criminalist photographing a physical crime scene and asking “What cameras are accepted by the courts?” As long as the camera records an accurate representation of the subject of the photograph, the results should be admissible; this is the “minimum standard of trustworthiness” at work. To contrast this with weight, realize that different cameras record photographs differently. A 3 megapixel camera will produce different results than a 1 megapixel camera. An attorney could argue about issues surrounding resolution, different algorithms, etc., but this would all go to the weight (persuasive factor) of the evidence, not its admissibility.

Hopefully this clarifies some of the confusion surrounding evidence admissibility. I’d love to hear other people’s comments and thoughts about this, including any additional questions.

Recovering a FAT filesystem directory entry in five phases

This is the last in a series of posts about five phases that digital forensics tools go through to recover data structures (digital evidence) from a stream of bytes. The first post covered fundamental concepts of data structures, as well as a high level overview of the phases. The second post examined each phase in more depth. This post applies the five phases to recovering a directory entry from a FAT file system.

The directory entry we’ll be recovering is from the Honeynet Scan of the Month #24. You can download the file by visiting the SOTM24 page. The entry we’ll recover is the 3rd directory entry in the root directory (the short name entry for _IMMYJ~1.DOC, istat number 5.)


The first step is to locate the entry, which is at byte offset 0x2640 (9792 decimal). How do we know this? Assuming we know we want the third entry in the root directory, we can calculate the offset using values from the boot sector, plus the fact that each directory entry is 0x20 (32 decimal) bytes long (this piece of information comes from the FAT file system specification). There is an implicit step that we skipped: recovering the boot sector (so we could use its values). To keep this post to a (semi) reasonable length, we’ll skip this step; it is fairly straightforward though. The calculation to locate the third entry in the root directory of the image file is:

3rd entry in root directory = (bytes per sector) * [(length of reserved area) + [(number of FATs) * (size of one FAT)]] + (offset of 3rd directory entry)

bytes per sector = 0x200 (512 decimal)

length of reserved area = 1 sector

number of FATs = 2

size of one FAT = 9 sectors

size of one directory entry = 0x20 (32 decimal) bytes

offset of 3rd directory entry = size of one directory entry * 2 (use 2, not 3, since offsets start at 0)

3rd entry in root directory = 0x200 * (1 + (2 * 9))+ (0x20 * 2) = 0x2640 (9792 decimal)
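As a sanity check, the same calculation can be written out in a few lines of Python (the constant and function names below are my own, not from the FAT specification):

```python
# Boot-sector values from the SOTM24 image (see above).
BYTES_PER_SECTOR = 0x200   # 512
RESERVED_SECTORS = 1       # length of the reserved area
NUM_FATS = 2
SECTORS_PER_FAT = 9
DIR_ENTRY_SIZE = 0x20      # 32 bytes per directory entry

def root_entry_offset(n):
    """Byte offset of the nth (0-based) entry in the root directory."""
    root_start = BYTES_PER_SECTOR * (RESERVED_SECTORS + NUM_FATS * SECTORS_PER_FAT)
    return root_start + n * DIR_ENTRY_SIZE

print(hex(root_entry_offset(2)))  # third entry is index 2 -> 0x2640
```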

Using xxd, we can see the hex dump for the 3rd directory entry:

$ xxd -g 1 -u -l 0x20 -s 0x2640 image
0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..


Continuing to the extraction phase, we need to extract each field. For a short name directory entry, there are roughly 12 fields (depending on whether you consider the first character of the file name its own field). The multibyte fields are stored in little endian, so we’ll need to reverse the bytes that we see in the output from xxd.

To start, the first field we’ll consider is the name of the file. This starts at offset 0 (relative to the start of the data structure) and is 11 bytes long. It’s the ASCII representation of the name.

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
File name = _IMMYJ~1DOC (_ represents the byte 0xE5; the dot between base name and extension isn’t stored in the field)

The next field is the attributes field, which is at offset 11 and 1 byte long. It’s an integer and a bit field, so we’ll examine it further in the decoding phase.

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Attributes = 0x20

Continuing in this manner, we can extract the rest of the fields:

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Reserved = 0x00

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Creation time (hundredths of a second) = 0x68

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Creation time = 0x4638

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Creation date = 0x2D2B

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Access date = 0x2D2B

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
High word of first cluster = 0x0000

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Modification time = 0x754F

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Modification date = 0x2C8F

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Low word of first cluster = 0x0002

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Size of file = 0x00005000 (bytes)
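A tool would typically perform this extraction in one shot. Here’s a sketch using Python’s struct module, with the raw bytes copied from the xxd output above (the format string and variable names are my own):

```python
import struct

# The 32 raw bytes of the directory entry at offset 0x2640.
raw = bytes([0xE5, 0x49, 0x4D, 0x4D, 0x59, 0x4A, 0x7E, 0x31,
             0x44, 0x4F, 0x43, 0x20, 0x00, 0x68, 0x38, 0x46,
             0x2B, 0x2D, 0x2B, 0x2D, 0x00, 0x00, 0x4F, 0x75,
             0x8F, 0x2C, 0x02, 0x00, 0x00, 0x50, 0x00, 0x00])

# "<" selects little endian; the field widths follow the layout above:
# an 11-byte name, three single bytes, seven 16-bit words, and a
# 32-bit file size.
(name, attributes, reserved, ctime_hundredths, ctime, cdate, adate,
 first_cluster_high, mtime, mdate, first_cluster_low,
 file_size) = struct.unpack("<11s3B7HI", raw)

print(hex(attributes))  # 0x20
print(hex(ctime))       # 0x4638
print(hex(file_size))   # 0x5000
```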


With the various fields extracted, we can decode the various bit-fields. Specifically the attributes, dates, and times fields. The attributes field is a single byte, with the following bits used to represent the various attributes:

  • Bit 0: Read only
  • Bit 1: Hidden
  • Bit 2: System
  • Bit 3: Volume label
  • Bit 4: Directory
  • Bit 5: Archive
  • Bits 6 and 7: Unused
  • Bits 0, 1, 2, 3: Long name

When decoding the fields in a FAT file system, the right most bit is considered bit 0. To specify a long name entry, bits 0, 1, 2, and 3 would be set. The value we extracted from the example was 0x20 or 0010 0000 in binary. The bit at offset 5 (starting from the right) is set, and represents the “Archive” attribute.
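The bit layout above translates directly into code. Here’s a minimal sketch in Python (the flag names are my own shorthand):

```python
# (mask, name) pairs for each attribute bit, right-most bit first.
ATTR_FLAGS = [
    (0x01, "Read only"),
    (0x02, "Hidden"),
    (0x04, "System"),
    (0x08, "Volume label"),
    (0x10, "Directory"),
    (0x20, "Archive"),
]
ATTR_LONG_NAME = 0x0F  # bits 0-3 all set

def decode_attributes(value):
    """Return the attribute names set in a raw attributes byte."""
    if value & ATTR_LONG_NAME == ATTR_LONG_NAME:
        return ["Long name"]
    return [name for mask, name in ATTR_FLAGS if value & mask]

print(decode_attributes(0x20))  # ['Archive']
```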

Date fields for a FAT directory entry are encoded in two byte values, and groups of bits are used to represent the various sub-fields. The layout for all date fields (modification, access, and creation) is:

  • Bits 0-4: Day
  • Bits 5-8: Month
  • Bits 9-15: Year

Using this knowledge, we can decode the creation date. The value we extracted was 0x2D2B which is 0010 1101 0010 1011 in binary. The day, month, and year fields are thus decoded as:

0010 1101 0010 1011
Creation day: 01011 binary = 0xB = 11 decimal

0010 1101 0010 1011
Creation month: 1001 binary = 0x9 = 9 decimal

0010 1101 0010 1011
Creation year: 0010110 binary = 0x16 = 22 decimal

A similar process can be applied to the access and modification dates. The value we extracted for the access date was also 0x2D2B, and consequently the access day, month, and year values are identical to the respective fields for the creation date. The value we extracted for the modification date was 0x2C8F (0010 1100 1000 1111 in binary). The decoded day, month, and year fields are:

0010 1100 1000 1111
Modification day: 01111 binary = 0xF = 15 decimal

0010 1100 1000 1111
Modification month: 0100 binary = 0x4 = 4 decimal

0010 1100 1000 1111
Modification year: 0010110 binary = 0x16 = 22 decimal
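The date decoding is just a few shifts and masks. A sketch in Python:

```python
def decode_fat_date(value):
    """Split a raw 16-bit FAT date into its (day, month, year) bit fields."""
    day = value & 0x1F            # bits 0-4
    month = (value >> 5) & 0x0F   # bits 5-8
    year = (value >> 9) & 0x7F    # bits 9-15
    return day, month, year

print(decode_fat_date(0x2D2B))  # creation/access date: (11, 9, 22)
print(decode_fat_date(0x2C8F))  # modification date:    (15, 4, 22)
```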

You might have noticed the year values seem somewhat small (i.e. 22). This is because the value for the year field is an offset starting from the year 1980. This means that in order to properly interpret the year field, the value 1980 (0x7BC) needs to be added to the value of the year field. This is done during the next phase (interpretation).

The time fields in a directory entry, similar to the date fields, are encoded in two byte values, with groups of bits used to represent the various sub-fields. The layout to decode a time field is:

  • Bits 0-4: Seconds
  • Bits 5-10: Minutes
  • Bits 11-15: Hours

Recall that we extracted the value 0x4638 (0100 0110 0011 1000 in binary) for the creation time. Thus the decoded seconds, minutes, and hours fields are:

0100 0110 0011 1000
Creation seconds = 11000 binary = 0x18 = 24 decimal

0100 0110 0011 1000
Creation minutes = 110001 binary = 0x31 = 49 decimal

0100 0110 0011 1000
Creation hours = 01000 binary = 0x8 = 8 decimal

The last value we need to decode is the modification time. The bit-field layout is the same for the creation time. The value we extracted for the modification time was 0x754F (0111 0101 0100 1111 in binary). The decoded seconds, minutes, and hours fields for the modification time are:

0111 0101 0100 1111
Modification seconds = 01111 binary = 0xF = 15 decimal

0111 0101 0100 1111
Modification minutes = 101010 binary = 0x2A = 42 decimal

0111 0101 0100 1111
Modification hours = 01110 binary = 0xE = 14 decimal
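The same shift-and-mask approach works for the time fields. A sketch in Python:

```python
def decode_fat_time(value):
    """Split a raw 16-bit FAT time into its (hours, minutes, seconds) fields."""
    seconds = value & 0x1F          # bits 0-4 (stored as seconds / 2)
    minutes = (value >> 5) & 0x3F   # bits 5-10
    hours = (value >> 11) & 0x1F    # bits 11-15
    return hours, minutes, seconds

print(decode_fat_time(0x4638))  # creation time:     (8, 49, 24)
print(decode_fat_time(0x754F))  # modification time: (14, 42, 15)
```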


Now that we’ve finished extracting and decoding the various fields, we can move into the interpretation phase. The values for the years and seconds fields need to be interpreted. The value of the years field is the offset from 1980 (0x7BC) and the seconds field is the number of seconds divided by two. Consequently, we’ll need to add 0x7BC to each year field and multiply each second field by two. The newly calculated years and seconds fields are:

  • Creation year = 22 + 1980 = 2002
  • Access year = 22 + 1980 = 2002
  • Modification year = 22 + 1980 = 2002
  • Creation seconds = 24 * 2 = 48
  • Modification seconds = 15 * 2 = 30

We also need to calculate the first cluster of the file, which simply requires concatenating the high and the low words. Since the high word is 0x0000, the value for the first cluster of the file is the value of the low word (0x0002).

In the next phase (reconstruction) we’ll use Python, so there are a few additional values that are useful to calculate. The first order of business is to account for the hundredths of a second associated with the seconds field for creation time. The value we extracted for the hundredths of a second for creation time was 0x68 (104 decimal). Since this value is greater than 100 we can add 1 to the seconds field of creation time. Our new creation seconds field is:

  • Creation seconds = 48 + 1 = 49

This still leaves four hundredths of a second left over. Since we’ll be reconstructing this in Python, we’ll use the Python time class which accepts values for hours, minutes, seconds, and microseconds. To convert the remaining four hundredths of a second to microseconds multiply by 10000. The value for creation microseconds is:

  • Creation microseconds = 4 * 10000 = 40000

The other calculation is to convert the attributes field into a string. This is purely arbitrary, and is being done for display purposes. So our new attributes value is:

  • Attributes = “Archive”
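The interpretation computations above (year offset, doubled seconds, hundredths of a second, and first cluster) can be sketched in a few lines of Python:

```python
# Raw values decoded in the earlier phases.
creation_year_raw = 22
creation_seconds_raw = 24
creation_hundredths = 0x68  # 104 decimal
high_word, low_word = 0x0000, 0x0002

creation_year = creation_year_raw + 1980                      # 2002
creation_seconds = creation_seconds_raw * 2                   # 48
# 104 hundredths = 1 whole second, with 4 hundredths left over
creation_seconds += creation_hundredths // 100                # 49
creation_microseconds = (creation_hundredths % 100) * 10000   # 40000
# Concatenate the high and low words to get the first cluster.
first_cluster = (high_word << 16) | low_word                  # 2
```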


This is the final phase of recovering our directory entry. To keep things simple, we’ll reconstruct the data structure as a Python dictionary. Most applications would likely use a Python object, and doing so is a fairly straightforward translation. Here is a snippet of Python code to create a dictionary with the extracted, decoded, and interpreted values (don’t type the >>> or …):

$ python
>>> from datetime import date, time
>>> dirEntry = dict()
>>> dirEntry["File Name"] = "\xE5IMMYJ~1DOC"
>>> dirEntry["Attributes"] = "Archive"
>>> dirEntry["Reserved Byte"] = 0x00
>>> dirEntry["Creation Time"] = time(8, 49, 49, 40000)
>>> dirEntry["Creation Date"] = date(2002, 9, 11)
>>> dirEntry["Access Date"] = date(2002, 9, 11)
>>> dirEntry["First Cluster"] = 2
>>> dirEntry["Modification Time"] = time(14, 42, 30)
>>> dirEntry["Modification Date"] = date(2002, 4, 15)
>>> dirEntry["size"] = 0x5000

If you wanted to print out the values in a (semi) formatted fashion you could use the following Python code:

>>> for key in dirEntry.keys():
...     print "%s == %s" % (key, str(dirEntry[key]))
...

And you would get the following output:

Modification Date == 2002-04-15
Creation Date == 2002-09-11
First Cluster == 2
File Name == ?IMMYJ~1DOC
Creation Time == 08:49:49.040000
Access Date == 2002-09-11
Reserved Byte == 0
Modification Time == 14:42:30
Attributes == Archive
size == 20480

At this point, there are a few additional fields that could have been calculated. For instance, the file name could have been broken into the respective 8.3 (base and extension) components. It might also be useful to calculate the allocation status of the associated file (in this case it would be unallocated). These are left as exercises for the reader ;).

This concludes the 3-post series on recovering data structures from a stream of bytes. Hopefully the example helped clarify the roles and activities of each of the five phases. Realize that the five phases aren’t specific to recovering file system data structures; they apply to network traffic, code, file formats, etc.

The five phases of recovering digital evidence

This is the second post in a series about the five phases of recovering data structures from a stream of bytes (a form of digital evidence recovery). In the last post we discussed what data structures were, how they related to digital forensics, and a high level overview of the five phases of recovery. In this post we’ll examine each of the five phases in finer grained detail.

In the previous post, we defined five phases a tool (or human if they’re that unlucky) goes through to recover data structures. They are:

  1. Location
  2. Extraction
  3. Decoding
  4. Interpretation
  5. Reconstruction

We’ll now examine each phase in more detail…


The first step in recovering a data structure from a stream of bytes is to locate the data structure (or at least the fields of the data structure we’re interested in). Currently, there are three commonly used methods for location:

  1. Fixed offset
  2. Calculation
  3. Iteration

The first method is useful when the data structure is at a fixed location relative to a defined starting point. This is the case for a FAT boot sector, which is located in the first 512 bytes of a partition. The second method (calculation) uses values from one or more other fields (possibly in other data structures) to calculate the location of a data structure (or field). The last method (iteration) examines “chunks” of data, and attempts to identify if the chunks are “valid”, meaning the (eventual) interpretation of the chunk fits predetermined rules for a given data structure.

These three methods aren’t mutually exclusive, meaning they can be combined and intermixed. It might be the case that locating a data structure requires all three methods. For example, when the ils tool from Sleuthkit is run against a FAT file system, it first recovers the boot sector, then calculates the start of the data region, and finally iterates over chunks of data in the data region, attempting to validate the chunks as directory entries.

While all three methods require some form of a priori knowledge, the third method (iteration) isn’t necessarily dependent on knowing the fixed offset of a data structure. From a purist perspective, iteration itself really is location. Iteration yields a set of possible locations, as opposed to the first two methods which yield a single location. The validation aspect of iteration is really a combination of the rest of the phases (extraction, decoding, interpretation and reconstruction) combined with post recovery analysis.

Another method for location, less common than the previous three, is outside knowledge from some source. This could be a person who has already performed location, or the source that created the data structure (e.g. the operating system). Due to the flexible and dynamic nature of computing devices, this isn’t commonly used, but it is a possible method.


Once a data structure (or the relevant fields) have been located, the next step is to extract the fields of the data structure out of the stream of bytes. Realize that the “extracting” is really the application of type information. The information from the stream is the same, but we’re using more information about how to access (and possibly manipulate) the value of the field(s). For example, the bytes 0x64 and 0x53 can be extracted as an ASCII string composed of the characters “dS”, or as the (big endian) value 0x6453 (25683 decimal). The information remains the same, but how we access and manipulate the values (e.g. concatenation vs. addition) differs. Knowledge of the type of field provides the context for how to access and manipulate the value, which is used during the later phases of decoding, interpretation, and reconstruction.

The extraction of a field that is composed of multiple bytes also requires knowledge of the order of the bytes, commonly referred to as the “endianness” or “byte ordering” (big vs. little endian). Take for instance the 16-bit hexadecimal number 0x6453. Since this is a 16-bit number, we would need two bytes (assuming 8-bit bytes) to store it. So the value 0x6453 is composed of the (two) bytes 0x64 and 0x53.

It’s logical to think that these two bytes would be adjacent to each other in the stream of bytes, and they typically are. The question now is: what is the order of the two bytes in the stream?

0x64, 0x53 (big endian)


0x53, 0x64 (little endian)

Obviously the order matters.
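The difference is easy to demonstrate with Python’s struct module, where ">" selects big endian and "<" selects little endian:

```python
import struct

raw = bytes([0x64, 0x53])

big = struct.unpack(">H", raw)[0]     # interpret as a big-endian 16-bit value
little = struct.unpack("<H", raw)[0]  # interpret as a little-endian 16-bit value

print(hex(big))     # 0x6453
print(hex(little))  # 0x5364
```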


After the relevant fields of a data structure have been located and extracted, it’s still possible further extraction is necessary, specifically for bit fields (e.g. flags, attributes, etc.) The difference between this phase and the extraction phase is that extraction pulls information from a stream of bytes, while decoding pulls information from the output of the extraction phase. In other words, the output from the extraction phase is used as the input to the decoding phase. Both phases, however, focus on extracting information. Computation using extracted information is reserved for later phases (interpretation and reconstruction).

Another reason to distinguish this phase from extraction is that most (if not all) computing devices can only read (at least) whole bytes, not individual bits. While a human with a hex dump could potentially extract a single bit, software would need to read (at least) a whole byte and extract the various bits within the byte(s).

There isn’t much that happens at this phase, as much of the activity focuses around accessing various bits.


The interpretation phase takes the output of the decoding phase (or the extraction phase if the decoding phase uses only identity functions) and performs various computations using the information. While extraction and decoding focus on extracting and decoding values, interpretation focuses on computation using the extracted (and decoded) values.

Two examples of interpretation are unit conversion and the calculation of values used during the reconstruction phase. An example of unit conversion would be converting the seconds field of a FAT time stamp from its native format (seconds/2) to a more logical format (seconds). A useful computation for reconstruction might be to calculate the size of the FAT region (in bytes) for a FAT file system (bytes per sector * size of one FAT structure (in sectors) * number of FAT structures).
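Both examples are one-liners in practice. A sketch in Python (the function names are my own):

```python
def fat_seconds(raw_seconds):
    """Convert the stored seconds/2 value into actual seconds."""
    return raw_seconds * 2

def fat_region_size(bytes_per_sector, sectors_per_fat, num_fats):
    """Size of the FAT region, in bytes."""
    return bytes_per_sector * sectors_per_fat * num_fats

print(fat_seconds(24))             # 48
print(fat_region_size(512, 9, 2))  # 9216 bytes for the SOTM24 image
```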

Since this phase is used heavily by the reconstruction phase, it’s not uncommon to see this phase embodied in the code for reconstruction. However this phase is still a logically separate component.


This is the last phase of recovering digital evidence. Information from previous phases is used to reconstruct a usable representation of the data structure (or at least the relevant fields.) Possible usable representations include:

  • A language specific construct or class (e.g. Python date object or a C integer)
  • Printable text (e.g. output from fsstat)
  • A file (e.g. file carving)

The idea is that the output from this phase can be used for some further analysis (e.g. timeline generation, analyzing email headers, etc.) Some tools might also perform error checking during reconstruction, failing if the entire data structure can’t be properly recovered. While this might be useful in some automated scenarios, it has the downside of potentially missing useful information when only part of the structure is available or valid.

At this point, we’ve gone into more detail of each phase and hopefully explained in enough depth the purpose and types of activity that happen in each. The next (and last) post in this series is an example of applying the five phases to recovering a short name directory entry from a FAT file system.

How forensic tools recover digital evidence (data structures)

In a previous post I covered “The basics of how digital forensics tools work.” In that post, I mentioned that one of the steps an analysis tool has to do is to translate a stream of bytes into usable structures. This is the first in a series of three posts that examines this step (translating from a stream of bytes to usable structures) in more detail. In this post I’ll introduce the different phases that a tool (or human if they’re that unlucky) goes through when recovering digital evidence. The second post will go into more detail about each phase. Finally, the third post will show an example of translating a series of bytes into a usable data structure for a FAT file system directory entry.

Data Structures, Data Organization, and Digital Evidence

Data structures are central to computer science, and consequently bear importance to digital forensics. In The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd Edition), Donald Knuth provides the following definition for a data structure:

Data Structure: A table of data including structural relationships

In this sense, a “table of data” refers to how a data structure is composed. This definition does not imply that arrays are the only data structure (which would exclude other structures such as linked lists.) The units of information that compose a data structure are often referred to as fields. That is to say, a data structure is composed of one or more fields, where each field contains information, and the fields are adjacent (next) to each other in memory (RAM, hard disk, usb drive, etc.)

The information the fields contain falls into one of two categories: the data a user wishes to represent (e.g. the contents of a file), and the structural relationships (e.g. a pointer to the next item in a linked list). It’s useful to think of the former as data and the latter as metadata, although the line between the two is not always clear and depends on the context of interpretation. What may be considered data from one perspective may be considered metadata from another. An example of this would be a Microsoft Word document, which from a file system perspective is data. However, from the perspective of Microsoft Word, the file contains both data (the text) as well as metadata (the formatting, revision history, etc.)

The design of a data structure not only includes the order of the fields, but also the higher level design goals for the programs which access and manipulate the data structures. For instance, efficiency has long been a desirable aspect of many computer programs. With society’s increased dependence on computers, other higher level design goals such as security, multiple access, etc. have also become desirable. As a result, many data structures contain fields to accommodate these goals.

Another important aspect in computing is how to access and manipulate the data structures and their related fields. Knuth defines this under the term “data organization”:

Data Organization: A way to represent information in a data structure, together with algorithms that access and/or modify the structure.

An example of this would be a field that contains the bytes 0x68, 0x65, 0x6C, 0x6C, and 0x6F. One way to interpret these bytes is as the ASCII string “hello”. In another interpretation, these bytes can be the integer number 448378203247 (decimal). Which one is it? Well there are scenarios where either could be correct. To answer the question of correct interpretation requires information beyond just the data structure and field layout, hence the term data organization. Even with self-describing data structures, information about how to access and manipulate the “self-describing” parts (e.g. type “1” means this is a string) is still needed.
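Both interpretations of those five bytes are easy to demonstrate in Python:

```python
raw = bytes([0x68, 0x65, 0x6C, 0x6C, 0x6F])

as_text = raw.decode("ascii")                     # read as an ASCII string
as_number = int.from_bytes(raw, byteorder="big")  # read as a big-endian integer

print(as_text)    # hello
print(as_number)  # 448378203247
```

Nothing in the bytes themselves says which interpretation is correct; that context has to come from the data organization.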

So where does all this information for data organization (and data structures) come from? There are a few common sources. Perhaps the first would be a document from the organization that designed the data structures and the software that accesses and manipulates them. This could be either a formal specification, or one or more informal documents (e.g. entries in a knowledge base.) Another source would be reverse engineering the code that accesses and manipulates the data structures.

If you’ve read through all of this, you might be asking “So how does this relate to digital forensics?” The idea is that data structures are a type of digital evidence. Realize that the term “digital evidence” is somewhat overloaded. In one context, a disk image is digital evidence (i.e. what was collected during evidence acquisition), and in another context, an email extracted from a disk image is digital evidence. This series focuses on the latter: digital evidence extracted from a stream of bytes. Typically this extraction would occur during the analysis phase, although (especially with activities such as verification) it can occur prior to the evidence acquisition phase.

The 5 Phases

Now that we’ve talked about what data structures are and how they relate to digital forensics, let’s see how to put this to use with our forensic tools. What we’re about to describe are five abstract phases, meaning not all tools implement them directly, and some tools don’t focus on all five. These phases can also serve as a methodology for recovering data structures, should you happen to be in the business of writing digital forensic tools.

  1. Location
  2. Extraction
  3. Decoding
  4. Interpretation
  5. Reconstruction

The results of each phase are used as input for the next phase, in a linear fashion.

An example will help clarify each phase. Consider the recovery of a FAT directory entry from a disk image. The first task would be to locate the desired directory entry, which could be accomplished through different mechanisms such as calculation or iteration. The next task is to extract the various fields of the data structure, such as the name, the date and time stamps, the attributes, etc. After the fields have been extracted, fields where individual bits represent sub-fields can be decoded. In the example of the directory entry, this would be the attributes field, which denotes whether a file is considered hidden, to be archived, a directory, etc. Once all of the fields have been extracted and decoded, they can be interpreted. For instance, the seconds field of a FAT time stamp is really the seconds divided by two, so the value must be multiplied by two. Finally, the data structure can be reconstructed using the facilities of the language of your choice, such as the time class in Python.
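
The walk-through above can be sketched in Python. This is a minimal illustration, not production code: the function name is mine, the field offsets follow the standard FAT directory-entry layout, and locating the entry (phase one) is assumed to have already produced an offset:

```python
import struct
from datetime import datetime

def recover_fat_entry(image, offset):
    # Phase 2: extraction -- pull the raw fields out of the 32-byte entry
    # (name at offset 0, attributes at 11, write time/date at 22/24).
    name, attrs, wtime, wdate = struct.unpack_from("<11sB10xHH", image, offset)

    # Phase 3: decoding -- individual bits of the attributes field
    hidden = bool(attrs & 0x02)
    is_dir = bool(attrs & 0x10)

    # Phase 4: interpretation -- FAT stores seconds divided by two,
    # and the year as an offset from 1980
    seconds = (wtime & 0x1F) * 2
    minutes = (wtime >> 5) & 0x3F
    hours = wtime >> 11
    day = wdate & 0x1F
    month = (wdate >> 5) & 0x0F
    year = 1980 + (wdate >> 9)

    # Phase 5: reconstruction -- build a native Python object
    written = datetime(year, month, day, hours, minutes, seconds)
    return name.decode("ascii").rstrip(), is_dir, hidden, written
```

A real tool would also handle long file names, deleted entries (first byte 0xE5), invalid dates, and so on; the point here is how the output of each phase feeds the next.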

There are a few interesting points to note about recovering data structures using the above methodology. First, not all tools go through all phases, at least not directly. For instance, file carving doesn’t directly care about data structures. Depending on how you look at it, though, file carving really does go through all five phases; it just uses an identity function for some of them. In addition, file carving does care about (parts of) data structures: it cares about the fields of the data structures that contain “user information”, not about the rest of the fields. In fact, much file carving is done with a built-in assumption about the data structure: that the fields that contain “user information” are stored in contiguous locations.

Another interesting point is the distinction between extraction, decoding, and interpretation. Briefly, extraction and decoding focus on extracting information (from a stream of bytes and from already extracted bytes, respectively), whereas interpretation focuses on computation using extracted and decoded information. The next post will go into these distinctions in more depth.

A third and subtler point comes from the transition of data structures between different types of memory, notably from RAM to a secondary storage device such as a hard disk or USB thumb drive. Not all structural information may make the transition from RAM, and what doesn’t make the transition is lost. For instance, a linked list data structure, which typically contains a pointer field to the next element in the list, may not record the pointer field when being written to disk. More often than not, such information isn’t necessary to read consistent data structures from disk; otherwise the data organization mechanism wouldn’t really be consistent and reliable. However, if an analysis scenario does require such information (it’s theoretically possible), the data structures would have to come directly from RAM, as opposed to after they’ve been written to disk. This problem doesn’t stem from the five phases, but instead stems from a loss of information during the transition from RAM to disk.

In the next post, we’ll cover each phase in more depth, and examine some of the different activities that can occur at each phase.

Copying 1s and 0s

I’ve been asked a few times over the past weeks about making multiple copies of disk images. Specifically, if I were to make a copy of a copy of a disk image, would the “quality” degrade? The short answer is no. It boils down to the idea of copying information from a digital format (as opposed to an analog format). Let’s say I write down the following string of ones and zeros on a 3×5 card:


Now, if I take out another 3×5 card and copy over (by hand) the same string of ones and zeros, I now have two cards with the string:

Finally, if I took out a third 3×5 card and copied yet again (by hand) the same string (from the second card) I would have three cards with the string:


Assuming I had good handwriting, then each copy would be legible and have the same string of ones and zeros. I could continue on indefinitely in this manner, and each time the new 3×5 card would not suffer in “quality”.

However, instead of copying (by hand) the string of ones and zeros, I could have photocopied the first 3×5 card. This would yield a result of one 3×5 card, and a (slightly) degraded copy. If I then photocopied the copy, I would get a further degraded (third) copy.

So, copying images (or any digital information) verbatim (i.e. using a lossless transformation process) doesn’t degrade the quality of the information: read a “one” from the source, write a “one” to the destination. Where you might run into trouble is if the source or destination media has (physical) errors. So it’s always a good idea to verify your media before imaging. It’s also a good idea to use a tool that tells you (and even better, logs) the errors it encounters during the imaging process.
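
This is also why imaging tools record cryptographic hashes: if the hash of a copy matches the hash of the source, the copy is bit-for-bit identical. A small sketch (the file names are made up for illustration):

```python
import hashlib
import shutil

def sha256(path):
    # Hash a file in chunks so large images don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# A stand-in "disk image"
with open("original.dd", "wb") as f:
    f.write(bytes(range(256)) * 4096)

# Copy a copy of a copy; every generation hashes identically.
shutil.copyfile("original.dd", "copy1.dd")
shutil.copyfile("copy1.dd", "copy2.dd")
assert sha256("original.dd") == sha256("copy1.dd") == sha256("copy2.dd")
```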

Planting evidence

The other day, Dimitris left a comment asking about how to determine if someone has altered the BIOS clock and placed a new file on the file system. In essence, this is “planting evidence”.

So, what might the side effects of this type of activity be? It’s difficult (if not impossible) to give an exact answer that will fit every time. Especially since there are different ways of planting evidence in this scenario. For example, the suspect could:

  1. Power down the system
  2. Change the BIOS clock
  3. Power the system back up
  4. Place the file on the system
  5. Power the system back down
  6. Fix the BIOS clock

Alternatively, the suspect could also:

  1. Yank the power cord
  2. Update the BIOS clock
  3. Boot up with a bootable CD
  4. Place the file on the system
  5. Yank the power cord (again)
  6. Fix the BIOS clock

It’s also possible to change the clock while the system is running. Again, it’s a different scenario with different possible side effects. Really, this needs to be evaluated on a case-by-case basis. However, the fundamental idea is to look for evidence of events that are somehow (directly or indirectly) tied to time, and see if you can find inconsistencies (a “glitch in the matrix” if you will.) The evidence can be physical as well as digital. For example, if the “victim” was having lunch with the CEO and CIO during the time in which the document was supposedly placed on the system, then the “victim” likely has a good alibi.

For digital evidence, there are three basic aspects of computing devices:

  • the code they run (e.g. programs such as anti-virus software),
  • the management code (e.g. operating system related aspects), and
  • the communication between computing devices (e.g. the network).

Dimitris mentioned the event log, so we’ll start with operating system related aspects (which fall under the category of host-based forensics). There are a number of operating-system-specific events that are tied to time. I suspect the reason the event log is so often referred to is that it is difficult to modify. Each time the event log service is started and stopped (when a Windows machine is booted up and shut down), an entry is made in the event log. Another time-related source on a Windows system could be the System Restore functionality. If a restore point had been created while the suspect was planting evidence, you may find an additional restore point with time-related events from the future.

Another aspect to examine would be the programs that are run on the system, which falls under the category of code forensics. For instance, if anti-virus software were configured to run at boot up, examining the anti-virus log might show time-related anomalies. Under the category of code forensics, you might also want to examine the file that is purported to have been “planted”. For example, Microsoft Word documents typically contain metadata such as the last time the file was opened, by whom, where the file has been, etc. If the suspect were to boot up the system with a bootable CD and place a document that way, examining the suspect file(s) may be one of the more fruitful options.

Don’t forget about the communication between computing devices, which falls under the category of network forensics. For example, if the system was booted up, did it grab an IP address via DHCP? If so, the DHCP server may have logs of this activity. Other common footprints of a system on a network include checking for system updates (e.g. with an updates server), NetBIOS announcements, synchronizing time with a time server, etc. This alone could be a good reason to pull logs from network devices during an incident, even if you are sure the network devices weren’t directly involved with the incident, since the log files could still contain useful information.

Now, I’m sure there are some folks who will say “well, this can all be faked or modified”. Yes, that’s true: if it’s digital, at some point it can be overwritten. First, realize that I didn’t intend for this to be an all-inclusive checklist of things to look for. Second, especially if the system was booted using the host operating system, there are a myriad of actions that take place, and accounting for every single one of them can be a daunting task. I’m not saying that it’s impossible, just that it is likely to be very difficult to do. In essence, it would come down to how well the suspect covered their tracks, and how thoroughly you are able to examine the system.

At this point, I am interested in hearing from others, where might they look? For instance, the registry could hold useful information. Thoughts?

How digital forensics relates to computing

A lot of people are aware that there is some inherent connection between digital forensics and computing. I’m going to attempt to explain my understanding of how the two relate. However, before we dive into digital forensics, we should clear up some misconceptions about what computing is (and perhaps what it is not).

Ask yourself the question “What is computing?” When I ask this question (or some related variant), I get different responses, ranging from programming to abstract definitions about Turing machines. Well, it turns out that in 1989 some folks over at the ACM came up with a definition that works out quite well. The final report, titled “Computing as a Discipline”, provides a short definition:

The discipline of computing is the systematic study of algorithmic processes that describe and transform information … The fundamental question underlying all of computing is, “What can be (efficiently) automated?”

This definition can be a bit abstract for those who aren’t already familiar with computing. In essence, the term “algorithmic processes” implies algorithms. That’s right, algorithms are at the heart of computing. A (well defined) algorithm is essentially a finite set of clearly articulated steps to accomplish some task. The task the algorithm is trying to accomplish can vary. So computing is about algorithms whose tasks are to describe and transform information.

When we implement an algorithm on a computing device, we have a computer program. So a computer program is really just the implementation of some algorithm for a specific computing device. The computing device could be a physical machine (e.g. an Intel architecture) or an abstract model (e.g. a Turing machine). When we implement an algorithm for a specific computing device, we’re really just translating the algorithm into a form the computing device can understand. To help make this more concrete, take for example Microsoft Word. Microsoft Word is a computer program: a slew of computer instructions encoded in a specific format. The computer instructions tell a computing device (e.g. the processor) what to do with information (the contents of the Word document). The computer instructions are a finite set of clearly articulated steps (described in a format the processor understands) to accomplish some task (editing the Word document).

There is one other concept to deal with before focusing on digital forensics, and that is how algorithms work with information. In order for an algorithm to transform and describe information, the information has to be encoded in some manner. For example, the letter “A” can be encoded (in ASCII) as the number 0x41 (65). The number 0x41 can then be represented in binary as 01000001. This binary number can then be encoded as the different positions of magnets and stored on a hard disk. Implicit in the structure of the algorithm is how the algorithm decodes the representation of information. This means that given just the raw encoding of information (e.g. a stream of bits), we don’t know what information is represented; we still need to understand (to some degree) how the information is used by the algorithm. I blogged about this a bit in a previous post “Information Context (a.k.a. Code/Data Duality)“.
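
The encoding chain for the letter “A” can be traced in a couple of lines of Python:

```python
letter = "A"
code_point = ord(letter)                  # 65, i.e. 0x41 in ASCII
binary_form = format(code_point, "08b")   # "01000001"
```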

So how does this relate to digital forensics? Simple: digital forensics is the application of knowledge of various aspects of computing to answer legal questions. It’s been common (in practice) to extend the definition of digital forensics to cover certain types of non-legal questions as well (e.g. policy violations in a corporate setting).

Think for a moment about what we do in digital forensics:

  • Collection of digital evidence: We collect the representation of information from computing devices.
  • Preservation of digital evidence: We take steps to minimize the alteration of the information we collect from computing devices.
  • Analysis of digital evidence: We apply our understanding of computer programs (algorithmic processes) to interpret and understand the information we collected. We then use our interpretation and understanding of the information to arrive at conclusions, using deductive and inductive logic.
  • Reporting: We relate our findings of the analysis of information collected from computing devices, to others.

Metadata can also be explained in terms of computing. Looking back at the definition for the discipline of computing, realize there are two general categories of information:

  • information that gets described and transformed by the algorithm
  • auxiliary information used by the algorithm when the steps of the algorithm are carried out

The former (information that gets described and transformed) can be called “content”, while the latter (auxiliary information used by the algorithm when executed) can be called “metadata”. Viewed from this perspective, metadata is particular to a specific algorithm (computer program), and what is content to one algorithm could be metadata to another.

Again, an example can help make this a bit clearer. Let’s go back to our Microsoft Word document. From the perspective of Microsoft Word, the content would be the text the user typed. The metadata would be the font information and attributes, revision history, etc. So, to Microsoft Word, the document contains both content and metadata. However, from the perspective of the file system, the Word document is the content, and things such as the location of the file, security attributes, etc. are all metadata. So what is considered by Microsoft Word to be metadata and content is just content to the file system.
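
The file system side of this split is easy to see from Python; the file name here is just a stand-in:

```python
import os

# The content, from the file system's point of view
with open("report.doc", "wb") as f:
    f.write(b"the user's text")

# The metadata the file system keeps about that content
info = os.stat("report.doc")
print(info.st_size)   # size of the content in bytes
print(info.st_mtime)  # last modification time
```

Everything inside the file, including what Word itself would call metadata, is just content as far as the file system is concerned.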

Hopefully this helps explain what computing is, and how digital forensics relates.

The basics of how programs are compiled and executed

Well, the post “The basics of how digital forensics tools work” seemed to be fairly popular, even getting a place on Digg. This post is focused on the basics of how a program gets compiled, and how it is loaded into memory when executed. It’s useful background for code analysis (reverse engineering), and is aimed at those who aren’t already familiar with compilers. The basic reasoning is that if you understand (at least at a high level) how compilers work, it will help when analyzing compiled programs. So, if you’re already familiar with code analysis, this post isn’t really aimed at you. If, however, you’re new to the field of reverse engineering (specifically code analysis), this post is for you.

Compiling is program transformation

From an abstract perspective, compiling a program is really just transforming the program from one language into another. The “original” language that the program is written in is commonly called the source language (e.g. C, C++, Python, Java, Pascal, etc.) The program as it is written in the source language is called the source code. The “destination” language, the language that the program is written to, is commonly called the target language. So compiling a program is essentially translating it from the source language to the target language.

The “typical” notion of compiling a program is transforming the program from a higher level language (e.g. C, C++, Visual Basic, etc.) to an executable file (PE, ELF, etc.). In this case the source language is the higher level language, and the target language is machine code (the byte code representation of assembly). Realize, however, that going from source code to executable file is more than just transforming source code into machine code. When you run an executable file, the operating system needs to set up an environment (process space) for the code (contained in the executable file) to run inside of. For instance, the operating system needs to know what external libraries will be used, what parts of memory should be marked executable (i.e. can contain directly executable machine code), as well as where in memory to start executing code. Two additional types of programs, linkers and loaders, accomplish these tasks.

Compiler front and back ends

Typically a compiler is composed of two primary components: the front end and the back end. The front end of a compiler typically takes in the source code, analyzes the structure (doing things such as checking for errors), and creates an intermediate representation of the source code (suitable for the back end). The back end takes the output of the front end (the intermediate representation), optionally performs optimization, and translates the intermediate representation of the source code into the target language. In the case of compiling a program, it is common for a compiler’s back end to generate human readable assembly code (mnemonics) and then invoke an assembler to translate the assembly code into its byte code representation, which is suitable for a processor.

Realize, it’s not an “absolute” requirement that a compiler be divided into front and back ends. It is certainly possible to create a compiler that translates directly from source code to target language. There are however benefits to the front/back end division, such as reuse, ease of development, etc.

Linkers and Loaders

The compiler took our source code and translated it to executable machine instructions, but we don’t yet have an executable file, just one or more files that contain executable code and data. These files are typically called object code, and in many instances aren’t suitable to stand on their own. There are (at least) three high-level tasks that still need to be performed:

  1. We still need to handle referencing dependencies, such as variables and functions (possibly in external code libraries.) This is called symbol resolution.
  2. We still need to arrange the object code into a single file, making sure separate pieces do not overlap, and adjusting code (as necessary) to reflect the new locations. This is called relocation.
  3. When we execute the program, we still need to set up an environment for it to run in, as well as load the code from disk into RAM. This is called program loading.

Conceptually, the line between linkers and loaders tends to blur near the middle. Linkers tend to focus on the first item (symbol resolution) while loaders tend to focus on the third item (program loading). The second item (relocation) can be handled by either a linker or loader, or even both. While linkers and loaders are often separate programs, there do exist single linker-loader programs which combine the functionality.

Linkers

The primary job of a linker is symbol resolution, that is, to resolve references to entities in the code. For example, a linker might be responsible for replacing references to the variable X with a memory address. The output from a linker (at compile time) typically includes an executable file, a map of the different components in the executable file (which facilitates future linking and loading), and (optionally) debugging information. The different components that a linker generates don’t always have to be separate files; they could all be contained in different parts of the executable file.

Sometimes you’ll hear references to statically or dynamically linked programs. Both of these refer to how different pieces of object code are linked together at compile time (i.e. prior to the program loading.) Statically linked programs contain all of the object code they need in the executable file. Dynamically linked programs don’t contain all of the object code they need, instead they contain enough information so that at a later time, the needed object code (and symbols) can be found and made accessible. Since statically linked programs contain all the object code and information they need, they tend to be larger.

There is another interesting aspect of linkers, in that there are both linkers which work at compile time, and linkers which perform symbol resolution during program loading, or even at run time. Linkers which perform their work at compile time are called compile time linkers, while linkers which perform their work at load and run time are called dynamic linkers. Don’t confuse dynamic linkers with dynamically linked executable files. The information in a dynamically linked executable is used by a dynamic linker. One is your code (the dynamically linked executable); the other helps your code run properly (the dynamic linker).

Loaders

There are two primary functions typically assigned to loaders: creating and configuring an environment for your program to execute in, and loading your program into that environment (which includes starting your program). During the loading process, the dynamic linker may come into play to help resolve symbols in dynamically linked programs.

When creating and configuring the environment, the loader needs information that was generated by the compile time linker, such as a map of which sections of memory should be marked as executable (i.e. can contain directly executable code), as well as where various pieces of code and data should reside in memory. When loading a dynamically linked program into memory, the loader also needs to load the libraries into the process environment. This loading can happen when the environment is first created and configured, or at run time.

In the process of creating and configuring the environment, the loader will transfer the code (and initialized data) from the executable file on disk into the environment. After the code and data have been transferred to memory (and any necessary modifications for load time relocation have been made), the code is started. In essence, the way the program is started is to tell the processor to execute code at the first instruction of the code (in memory). The address of the first instruction of code (in memory) is called the entry point, and is typically set by the compile time linker.

Further reading

Well, that about wraps it up for this introduction. There are some good books out there on compilers, linkers, and loaders. The classic book on compilers is Compilers: Principles, Techniques, and Tools. I have the first edition, and it is a bit heavy on the computer science theory (although I just ordered the second edition, which was published in August 2006). The classic book on linkers and loaders is Linkers and Loaders. It too is a bit more abstract, but considerably lighter on the theory. If you’re examining code in a Windows environment, I’d also recommend reading (at least parts of) Microsoft Windows Internals, Fourth Edition.

The basics of how digital forensics tools work

I’ve noticed there is a fair amount of confusion about how forensics tools work behind the scenes. If you’ve taken a course in digital forensics this will probably be “old hat” for you. If, on the other hand, you’re starting off in the digital forensics field, this post is meant for you.

There are two primary categories of digital forensics tools, those that acquire evidence (data), and those that analyze the evidence. Typically, “presentation” functionality is rolled into analysis tools.

Acquisition tools, well… acquire data. This is actually the easier of the two types of tools to write, and there are a number of acquisition tools in existence. There are two ways of storing the acquired data: on a physical disk (disk-to-disk imaging) or in a file (disk-to-file imaging). The file that the data is stored in is also referred to as a logical container (or logical evidence container, etc.) There are a variety of logical container formats, with the most popular being: DD (a.k.a. raw, as well as split DD) and EWF (Expert Witness, a variant used with EnCase). There are other formats, including sgzip (seekable gzip, used by PyFlag) and AFF (Advanced Forensics Format). Many logical containers allow an examiner to include metadata about the evidence, including cryptographic hash sums, and information about how and where the evidence was collected (e.g. the technician’s name, comments, etc.)

Analysis tools work in two major phases. In the first phase, the tool reads in the evidence (data) collected by the acquisition tools as a series of bytes, and translates the bytes into a usable structure. An example of this would be code that reads in data from a DD file and “breaks out” the different components of a boot sector (or superblock on EXT2/3 file systems). In the second phase, the analysis tool examines the structure(s) that were extracted in the first phase and performs some actual analysis. This could be displaying the data to the screen in a more human-friendly format, walking directory structures, extracting unallocated files, etc. An examiner will typically interact with the analysis tool, directing it to analyze and/or produce information about the structures it extracts.
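
As a sketch of the first phase, here is how a tool might break out a few fields of a FAT boot sector from a raw (DD) image. The function name is mine, only a handful of fields are shown, and the offsets follow the FAT BIOS Parameter Block layout:

```python
import struct

def parse_fat_boot_sector(sector):
    # First phase: translate raw bytes into a usable structure.
    # Bytes/sector at offset 11, sectors/cluster at 13,
    # reserved sectors at 14, number of FATs at 16.
    bytes_per_sector, sectors_per_cluster = struct.unpack_from("<HB", sector, 11)
    reserved_sectors, fat_count = struct.unpack_from("<HB", sector, 14)
    return {
        "bytes_per_sector": bytes_per_sector,
        "sectors_per_cluster": sectors_per_cluster,
        "reserved_sectors": reserved_sectors,
        "fat_count": fat_count,
    }
```

The second, analysis phase would then walk these fields, for instance to compute where the FATs and the root directory live.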

Presentation of digital evidence (and conclusions) is an important part of digital forensics, and is ultimately the role of the examiner, not a tool. Tools, however, can support presentation. EnCase allows an examiner to bookmark items, and ProDiscover allows an examiner to tag “Evidence of Interest”. The items can then be exported as files, to Word documents, etc. Some analysis tools also have built-in functionality to help with creating a report.

Of course, there is a lot more to the implementation of the tools than the simplification presented here, but this is the basics of how digital forensics tools work.