Silence…

It’s been quite some time (over a month) since I made a post (to here or the forensically sound yahoo group). I’ve had a whirlwind of client work, including teaching at a number of SANS conferences. I did get a bit of press coverage while at the San Jose SANS conference. The press came to me seeking information about Schwarzenegger’s ethnic comments. Let me state that I have absolutely NO knowledge about the issue outside of what has been presented in the news. I have not been contacted by anyone other than NBC11 about this.

A reporter from NBC11 showed up at the end of one of the days of the conference and asked me some questions about what is and is not possible. The “interview” made an 8 second blurb on the nightly news, and here is a link to a web write-up of the “interview”. To clarify, the comments attributed to me were pieced together from multiple different sentences. I did not see anything on the governor’s website that appeared to be related to the ethnic comments, though I did not perform a thorough search.

On a slightly different note, I registered the domain forensiccomputing.us and pointed it at this blog. 🙂

“Forensically Sound Duplicate” (Update)

So after the whirl of feedback I’ve received, we’ve moved discussions of this thread from Richard Bejtlich’s blog to a Yahoo! group. The url for the group is: http://groups.yahoo.com/group/forensically_sound/

We now return this blog to its regularly scheduled programming… 🙂

“Forensically Sound Duplicate”

I was reading Craig Ball’s (excellent) presentations on computer forensics for lawyers at http://www.craigball.com/articles.html. One of the articles mentions a definition for forensically sound duplicate as:

“A ‘forensically-sound’ duplicate of a drive is, first and foremost, one created by a method which does not, in any way, alter any data on the drive being duplicated. Second, a forensically-sound duplicate must contain a copy of every bit, byte and sector of the source drive, including unallocated ’empty’ space and slack space, precisely as such data appears on the source drive relative to the other data on the drive. Finally, a forensically-sound duplicate will not contain any data (except known filler characters) other than which was copied from the source drive.”

There are 3 parts to this definition:

  1. Obtained by a method which does not, in any way, alter any data on the drive being duplicated
  2. That a forensically sound duplicate must contain a copy of every bit, byte and sector of the source drive
  3. That a forensically sound duplicate will not contain any data except filler characters (for bad areas of the media) other than that which was copied from the source media.

This definition seems to fit when performing the traditional disk-to-disk imaging. That is, imaging a hard disk from a non-live system, and writing the image directly to another disk (not a file on the disk, but directly to the disk.)

Picking this definition apart, the first thing I noticed (and subsequently emailed Craig about) was the fact that the first part of the definition is often an ideal. Take for instance imaging RAM from a live system. The act of imaging a live system changes the RAM, and consequently the data. The exception would be to use a hardware device that dumps RAM (see “A Hardware-Based Memory Acquisition Procedure for Digital Investigations” by Brian Carrier and Joe Grand.)

During the email discussions, Craig pointed out an important distinction between data alteration inherent in the acquisition process (e.g. running a program to image RAM requires the imaging program to be loaded into RAM, thereby modifying the evidence) and data alteration in an explicit manner (e.g. wiping the source evidence as it is being imaged.) Remember, one of the fundamental components of digital forensics is the preservation of digital evidence.

A forensically sound duplicate should be acquired in such a manner that the acquisition process minimizes the data alterations inherent to data acquisition, and does not explicitly alter the source evidence. Another way of wording this could be “an accurate representation of the source evidence”. This wording is intentionally broad, allowing one to defend/explain how the acquisition was accurate.

The second part of the definition states that the duplicate should contain every bit, byte, and sector of the source evidence. Similar to the first part of the definition, this is also an ideal. If imaging a hard disk or other physical media, then this part of the definition normally works well. Consider, however, the scenario where a system with multiple terabytes of disk storage contains an executable file with malicious code. If the size of the disk (or some other technological restriction) prevents imaging every bit/byte/sector of the disk, then how should the contents of the file be analyzed if simply copying the contents of the file does not make it “forensically sound”? What about network based evidence? According to the folks at the DFRWS 2001 conference (see the “Research Road Map” PDF) there are 3 “types” of digital forensic analysis that can be applied:

– Media analysis (your traditional file system style analysis)
– Code analysis (which can be further abstracted to content analysis)
– Network analysis (analyzing network data)

Since more and more of the latter two types of evidence are starting to come into play (e.g. the recent UBS trial with Keith Jones analyzing a logic bomb), a working definition of “forensically sound duplicate” shouldn’t be restricted to just “media analysis”. Perhaps this can be worded as “a complete representation of the source evidence”. Again, this is intentionally broad so as to leave room for explanation of circumstances.

The third part of the definition states that the duplicate will not contain any additional data (with the exception of filler characters) other than what was copied from the source medium. This part of the definition rules out “logical evidence containers”, essentially any type of evidence file format that includes any type of metadata (e.g. pretty much anything “non dd”.) Compressing the image of the evidence on-the-fly (e.g. dd piped to gzip piped to netcat) would also break this. Really, if the acquisition process introduces data not contained in the source evidence, the newly introduced data should be distinguishable from the duplication of the source evidence.
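To make the “distinguishable” idea concrete, here is a minimal Python sketch (my own illustration, not a real acquisition tool; the `acquire` helper and its parameters are made up). It compresses on-the-fly while hashing only the raw source bytes, so the data the gzip container introduces (headers, checksums) stays distinguishable from the duplicated source evidence:

```python
import gzip
import hashlib

def acquire(source_path, image_path, chunk_size=65536):
    """Copy source_path into a gzip-compressed image file, hashing the
    raw (uncompressed) bytes as they are read. The hash covers only
    source data, so the bytes gzip introduces (headers, checksums)
    remain distinguishable from the duplicated evidence."""
    md5 = hashlib.md5()
    with open(source_path, "rb") as src, gzip.open(image_path, "wb") as img:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            md5.update(chunk)   # hash the source bytes only
            img.write(chunk)    # gzip wraps them in its own framing
    return md5.hexdigest()
```

Decompressing the image and re-hashing should reproduce the same digest, which is one way to argue the duplicate is an accurate representation even though the container added data.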

Now beyond the 3 parts that Craig mentions, there are a few other things to examine. First of all, what components of digital forensics should a working definition of “forensically sound” cover? Ideally, just the acquisition process. The analysis component of forensics, while driven by what was and was not acquired, should not be hindered by the definition of “forensically sound”.

Another fact to consider is that a forensic exam should be neutral, and not “favor” one side or the other. This is for several reasons:
– Digital forensic science is a scientific discipline. Science is ideally as neutral as possible (introducing as little bias as possible). Favoring one side or the other introduces bias.
– Oftentimes the analysis (and related conclusions) is used to support an argument for or against some theory. Not examining relevant information that could either prove or disprove a theory (e.g. inculpatory and exculpatory evidence) can lead to incorrect decisions.

So, the question of what data should and shouldn’t be included in a forensically sound duplicate is hard to answer. Perhaps “all data that is relevant and reasonably believed to be relevant.” The latter part could come into play when examining network traffic (especially on a large network). For instance, when monitoring a suspect on the network (sniffing traffic), if I create a filter to only log/extract traffic to and from the system the suspect is on, I am potentially missing other traffic on the network (sometimes this can even be legally required, since sniffing network traffic is considered in many places a wiretap). A definition of “forensically sound duplicate” shouldn’t prevent this type of acquisition.

So, working with some of what we have, here is perhaps a (base if nothing else) working definition for “forensically sound”:

“A forensically sound duplicate is a complete and accurate representation
of the source evidence. A forensically sound duplicate is obtained in a
manner that may inherently (due to the acquisition tools, techniques, and
process) alter the source evidence, but does not explicitly alter the
source evidence. If data not directly contained in the source evidence is
included in the duplicate, then the introduced data must be
distinguishable from the representation of the source evidence. The use
of the term complete refers to the components of the source evidence that
are both relevant, and reasonably believed to be relevant.”

At this point I’m interested in hearing comments/criticisms/etc. so as to improve this definition. If you aren’t comfortable posting in a public forum, you can email me instead and I’ll anonymize the contents 🙂

Self replicating software – Part 3 – Other methods

Up until now, this thread of posts has been rather theoretical, talking about Turing machines, etc. The only time there was some source code was to show a program that can print out a description of itself (its source code).

Well, one problem with the self-replication method for getting a copy of a program’s description is that the program needs to contain a partial copy of itself (and it computes the rest). This can cause some nasty code bloat. The method many viruses use instead is to copy themselves from another spot in memory.

For instance, if you have an executable file being run in memory (the host), and a portion of the executable code is a “virus” (a.k.a. viral code), then by virtue of the fact that the host program is running in memory, a copy of the virus code is also in memory. All the virus needs to do is locate the start of itself in memory and copy it out. No need for the recursion theorem, no need to carry a partial copy of itself.

Some simple methods for finding where the virus is in memory are:

  1. Always be located at the same memory address
  2. When the virus is infecting another host file, embed in the host file the location of the virus
  3. Once the virus starts, locate the address of the first instruction
  4. Scan the host file’s memory for a signature of the virus

Accomplishing the first method can require some careful forethought. In essence, the virus has to find a location in memory that is normally not taken by some other portion of code or data. When the “virus” part of the executable code runs, it can copy itself out of memory since it knows where it starts (assuming it also knows its size).

The second method requires that the virus (rather the virus’s author) understand a bit about the loading process that the executable file undergoes when it is loaded (assuming the virus is infecting a file on disk rather than a process running in memory). The virus can then “hard-code” the address of the viral code in the newly infected host. When the newly infected host file gets executed (e.g. by the user) and control gets transferred to the virus portion of the host, the virus knows where it is in memory since it was recorded by the previous incarnation.

The third method normally requires a bit of hackery. When the virus portion of the host file gets control of execution it needs to perform some sort of set of instructions to figure out where it is. The classic method of doing this in Intel x86 assembly is:

0x8046453: CALL 0x8046458
0x8046458: POP EAX

The first line calls a function which starts at the second line. The call instruction first pushes the address of the next instruction (which would be 0x8046458) onto the stack, and then transfers control to the operand of the call (which is also 0x8046458). The second line pops the last value that was pushed onto the stack (the address in memory) and saves a copy in the EAX register.

The fourth method simply requires that the virus have a static signature, and that once control of execution gets transferred to the virus portion of the host file, the virus scan memory until it finds the signature. Of course there is the possibility for a false positive, but by carefully choosing the signature, this could be reduced.
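The signature-scanning idea in the fourth method can be sketched in a few lines of Python (a toy model for illustration only, not real virus code; the buffer contents, signature, and offsets here are made up):

```python
def find_virus(memory: bytes, signature: bytes) -> int:
    """Scan a buffer for the first occurrence of a signature.

    Returns the offset of the signature, or -1 if not found. The idea
    is that code which knows its own signature can locate (and then
    copy) itself out of a larger buffer; a carefully chosen signature
    reduces false positives."""
    return memory.find(signature)

# Toy "host" memory: 100 bytes of padding, then a marker standing in
# for the start of the viral code, then more padding.
host = b"\x00" * 100 + b"VIRUS_SIG" + b"\x90" * 50
offset = find_virus(host, b"VIRUS_SIG")   # offset == 100
```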

There is another method that I haven’t described, and that is that the virus can obtain a copy of itself from a file on disk. This generally works for script based viruses, but also has the potential to work for other types of executable files. This was the method used by the biennale.py virus.
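For script-based viruses, the “copy of itself on disk” method boils down to reading the script’s own source file. A minimal (and harmless) Python sketch, where the function name is my own and the caller is assumed to pass the running script’s path, e.g. `__file__`:

```python
def own_description(path: str) -> str:
    """Read a script's source straight from disk.

    A script virus passes its own path (e.g. Python's __file__) here:
    no recursion theorem and no embedded partial copy are needed,
    because the description is simply copied out of the file that is
    being executed."""
    with open(path) as f:
        return f.read()
```

A real script virus would then write this string into another file instead of just returning it.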

This ends this thread of posts (at least for a little while). The original reason for creating this thread was to explain a little bit about how viruses work. This is useful for computer forensic analysts/examiners examining viruses, since understanding the basics goes a long way when examining actual specimens. Of course there is much more than the simplification given in this series of posts, but hopefully this can provide some guidance.

Self replicating software – Part 2 – Recursion theorem proof

In this post I’ll cover the proof of the Recursion theorem (see Self Replicating Software – Part 1 – The Recursion Theorem).

The proof for the Recursion theorem is a constructive proof, meaning that a Turing Machine (TM) that can reference its own description is actually constructed. This proof was taken from Michael Sipser’s “Introduction to the Theory of Computation”.

Before we get into the proof of the Recursion Theorem, we’ll need a special TM. There is a computable function q which, given input w, prints out the description of a TM that prints w.

So, if we have a machine P_w that prints out w (where w is input to the machine), q(w) prints out the description of P_w. The construction of such a machine is relatively simple. Let’s call the machine that computes q(w) Q_w. Here is an informal description:

Q_w = "
  Step 1: Create a machine P_w = "
    Step 1.1: Print w
  "
  Step 2: Print <P_w>
"

Recall that <P_w> is notation for “The description of the Turing Machine P_w”.
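The function q can be modeled in Python: q(w) returns the source of a program that, when run, prints w. (A rough sketch; quoting w via repr is my own choice of encoding, not part of the formal construction.)

```python
def q(w: str) -> str:
    """Return the description (source code) of a program P_w that,
    when run, prints the string w."""
    return "print(%r)" % w

# q("hello") is the source of a program that prints "hello":
source_of_P = q("hello")   # the string "print('hello')"
```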

Last time, we had a TM called TM_T that took two inputs, one being the description of a TM, the other input called w, and performed some computation with them. We also had another TM called TM_R which took one input, namely the same w passed to TM_T, and performed the same computation as TM_T (on the description of a TM) but with its own description (<TM_R>) filled in. Since TM_R only takes one input, w, the description of TM_R is internal to TM_R.

The TM_R is constructed in a special manner, namely it has 3 components. Let’s call these TM_A, TM_B, and TM_T where TM_T is the same TM_T referenced before. TM_A is a special TM that generates a description of TM_B and TM_T. TM_B computes the description of TM_A. The tricky part is to avoid creating a cyclic reference. In essence, you can’t have TM_A print out <TM_B><TM_T> and then have TM_B print out <TM_A> since they reference each other directly. The trick is to have TM_B calculate <TM_A> by using the output from TM_A.

Think about this abstractly for a moment: if TM_A were to print some string w, then <TM_A> is simply q(w) (a.k.a. the description of a machine that prints w). In the case of the recursion theorem, TM_A is the machine that prints out <TM_B><TM_T>, and TM_B computes the description of a machine that prints out <TM_B><TM_T>.

So, the proof goes as follows:

TM_R(w) = "
TM_A which is P_<TM_B><TM_T>
TM_B which is Q_w
TM_T which is specified ahead of time

Step 1: Run TM_A
Step 2: Run TM_B on the output of TM_A
Step 3: Combine the output of TM_A with the output of TM_B and w
Step 4: Pass control to TM_T
"

Think about this: TM_A is a machine that prints out <TM_B><TM_T>. TM_B computes the description of a machine that prints out <TM_B><TM_T>. Note the following:

TM_A = P_<TM_B><TM_T>
<TM_A> = <P_<TM_B><TM_T>>
<P_<TM_B><TM_T>> = q(<TM_B><TM_T>)

TM_B computes q(w), where w is <TM_B><TM_T>

If TM_T prints the description, then the resulting machine is called a quine. For TM_T to be a virus, it would “print” the description to a file (somewhere on the tape).

We’ve covered a fair amount of theory between this post and the last post so let’s put it into some concrete code. In this example, TM_T will just print the description of the machine. This type of program is known as a quine or selfrep.

[lost@pagan code]$ grep -n ".*" quine.py
1:#!/usr/bin/python
2:
3:def TM_T(M):
4: print M;
5:
6:def TM_B(w):
7: computedA = '# Below is TM_A' + chr(0xA) + 's = ' + '"' + '""' + w + '"' + '"";' + chr(0xA) + 'TM_B(s);';
8: TM_T(w + computedA);
9:
10:# Below is TM_A
11:s = """#!/usr/bin/python
12:
13:def TM_T(M):
14: print M;
15:
16:def TM_B(w):
17: computedA = '# Below is TM_A' + chr(0xA) + 's = ' + '"' + '""' + w + '"' + '"";' + chr(0xA) + 'TM_B(s);';
18: TM_T(w + computedA);
19:
20:""";
21:TM_B(s);

[lost@pagan code]$ ./quine.py | md5sum; md5sum quine.py
dd9958745db4735ceab8101cae0a15a8  -
dd9958745db4735ceab8101cae0a15a8 quine.py
(Note the md5sum of the output and the md5sum of the input are the same)

Let’s analyze this piece by piece. In this program, TM_A is represented by lines 11 – 21. TM_A produces the description of TM_B and TM_T. Notice that TM_A is a copy of (the descriptions of) TM_B and TM_T stuck in triple quotes (one method in Python for representing strings).

TM_B is on lines 6-10 (including the comment “# Below is TM_A”). Recall that TM_B computes the description of TM_A using the output from TM_A. The input to TM_B is the output of TM_A, which was the description of TM_B and TM_T. Finally, the output from TM_B is transferred to TM_T which prints the contents.

Here is the logic flow:

Lines 11-20 calculate P_<TM_B><TM_T>
Line 21 takes the output (s) and passes it to TM_B
Line 7 computes the description of TM_A (computedA) from the output of TM_A (which was passed to TM_B)
Line 8 combines the computed description of TM_A with the output of TM_A and passes the whole string to TM_T.
Line 4 prints the description of the machine

In the case of a virus, instead of printing the description to the screen, a virus would print the description to another python file.

What we’ve done is show (from a theoretical point of view) why viruses can exist. The fact that viruses exist already tells us this, but we’ve now demonstrated it from an abstract view. We’ve also given a basic algorithm for creating viruses, and shown how to implement something simple in Python.

In the next post we’ll cover another method (perhaps more commonly used) a virus can use to get its own description (one which doesn’t depend on the recursion theorem).

Self replicating software – Part 1 – The Recursion Theorem

This is the first in a multi part post about computing theory and self replicating software. This post assumes you have knowledge and understanding of a Turing Machine (abbreviated TM). If you aren’t familiar with Turing Machines (TMs) then you may want to take a look at the Wikipedia entry on the topic at http://en.wikipedia.org/wiki/Turing_machine before continuing. Much of the material for this post comes from Michael Sipser’s “Introduction to the Theory of Computation”.

Let’s assume there is a computable function t which takes two inputs and yields one output. This function can be stated as:

t: E* x E* -> E*

Call the TM that implements this function TM_T. Now if one of the inputs that TM_T takes is the description of a TM (perhaps a description of TM_T or the description of another TM) we can call TM_T a “Meta TM”, meaning it operates on TMs (more accurately, descriptions of TMs). TM_T could do a number of things with the description, such as emulate it (making TM_T a universal TM), or print the description to the TM’s tape (making TM_T a “quine”). For the purpose of the recursion theorem, it’s irrelevant what TM_T does with the description. So TM_T can be written as taking two inputs:
TM_T(<TM>, w)

Where <TM> is a description of a TM. The other argument (w) could be input to the TM described by the <TM> argument.

Now, assume there is another computable function r which takes one parameter, w, the same w passed as the second argument to function t. Function r can be stated as:

r: E* -> E*

Call the TM that implements this function TM_R. So TM_R can be written as taking one input:
TM_R(w)

The TM_R is described by <TM_R>.

The recursion theorem states (in terms of functions r and t):

r(w) = t(<TM_R>, w)

alternatively

TM_R(w) = TM_T(<TM_R>, w)

This means that the output from TM_T given a description of TM_R and input w yields the same output as TM_R with input w. Think about the implications of this for a moment. If TM_T prints the description of the TM passed to it (which in this case is the description of TM_R), then TM_R can do the same thing without having the description passed in explicitly; instead TM_R can calculate <TM_R>. In essence TM_R could print out a description of itself, without having the description given to it.

I’ll post the proof and examples in Python in later entries. The recursion theorem can allow a virus to obtain a description of itself and “print” the description into a file. There are, however, other methods that viruses can use to obtain descriptions of themselves, which will also be covered in later posts.

Naming structure of recycle bin files

Was doing some research on the structure of the Windows Recycle Bin, and found an interesting article over at Microsoft. It talks about the naming structure of the files in the Recycle Bin directories. In essence, the structure is as follows:

D<drive letter from original path><order #>.<original extension>

The <order #> field is a number signifying the order in which the file was deleted since the recycle bin was last emptied.

For example, lets say I empty the recycle bin and then delete the following files (in the order shown):

c:\somefile.txt
e:\document.doc
d:\picture.jpg
f:\suspect_file.txt

Then, in the recycle bin directory for my user, I would find the following file names:

Dc1.txt
De2.doc
Dd3.jpg
Df4.txt
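The naming scheme can be sketched as a small Python helper. (This is my own illustration of the pattern; the function name and the running counter are assumptions for demonstration, not how Windows itself implements the numbering.)

```python
import os

def recycle_names(deleted_paths):
    """Yield recycle-bin style names (D<drive><order>.<ext>) for full
    paths given in deletion order, numbering from 1 as Windows does
    after the recycle bin is emptied."""
    for order, path in enumerate(deleted_paths, start=1):
        drive = path[0].lower()          # e.g. 'c' from 'c:\somefile.txt'
        ext = os.path.splitext(path)[1]  # e.g. '.txt'
        yield "D%s%d%s" % (drive, order, ext)

names = list(recycle_names([
    r"c:\somefile.txt", r"e:\document.doc",
    r"d:\picture.jpg", r"f:\suspect_file.txt",
]))
# names == ['Dc1.txt', 'De2.doc', 'Dd3.jpg', 'Df4.txt']
```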

This is probably already well known, thought it would be interesting to share.

Base+Offset notation (or why we start counting with zero)

Every now and again, I get the question of why we start counting things such as arrays, offsets, etc. with zero (0) and not one (1). The answer is simple: when specifying a data structure, we normally specify the byte (or whatever unit) offset for the start of each field in the structure. Think how a computer would get to a specific element of a list. Let’s say we want the third element in the list (the offset is 2). The start of the list is the first element. We move to the next element, which is at offset 1, and then we move to the next element (the third item), which is at offset 2. We now read the contents of that element. When we count with offsets, the first byte is zero bytes from the start (since it is the start). Here is what it looks like graphically:

0 1 2 3 4 5 6
A B C D E F G
^     (offset from A = 0)

Let’s say we want to get to element C (the third element, offset 2). If “A” is the start, then it is zero bytes from the start. That means, that A’s offset is 0. There is a caret (^) beneath “A” to show the current position. Next we move forward one element (offset = 1) to the second element, which looks like:

0 1 2 3 4 5 6
A B C D E F G
  ^   (offset from A = 1)

Finally, we move forward one more element (offset = 2) to the third element, which looks like:

0 1 2 3 4 5 6
A B C D E F G
    ^ (offset from A = 2)

For example, the FAT boot sector starts out like this: (Note: this was taken from http://support.microsoft.com/kb/q140418/)

Field                 Offset     Length
-----                 ------     ------
Bytes Per Sector      11         2
Sectors Per Cluster   13         1
Reserved Sectors      14         2
FATs                  16         1
Root Entries          17         2
Small Sectors         19         2
Media Descriptor      21         1
Sectors Per FAT       22         2
Sectors Per Track     24         2
Heads                 26         2
Hidden Sectors        28         4
Large Sectors         32         4

Notice that the “Bytes per Sector” field (which tells us the size of a sector, usually 512) is at offset 11. This means that this field starts at offset 11 (a.k.a. 11 bytes from the start, the 12th byte, etc.) where the offset of the first byte is 0 (the start).
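Those offsets translate directly into code. Here is a minimal Python sketch using a fabricated 36-byte boot-sector prefix (not a real disk image) to read the “Bytes Per Sector” field at offset 11 from the table above:

```python
import struct

def bytes_per_sector(boot_sector: bytes) -> int:
    """Read the 2-byte little-endian 'Bytes Per Sector' field, which
    the FAT boot sector layout places at offset 11."""
    return struct.unpack_from("<H", boot_sector, 11)[0]

# Fabricated example: 11 bytes of jump/OEM data, then the value 512
# (0x0200) stored little-endian as bytes 11-12, then zero padding.
fake_boot = b"\x00" * 11 + struct.pack("<H", 512) + b"\x00" * 23
assert bytes_per_sector(fake_boot) == 512
```

Counting from zero means the offsets in the table can be passed straight to `unpack_from` with no off-by-one adjustment.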

Argument for MD5

So, there has been a lot of talk over the past few years about the use of MD5 hash sums in digital forensics, since collisions have been found for MD5.

First, a hash algorithm/function has the following properties:
1) The algorithm takes variable-sized input data and transforms it into a fixed-size output (usually smaller).

2) The algorithm is one-way, meaning that calculating a hash sum of the input data is relatively easy, while reversing the process (calculating the original input data given the hash sum) is hard (a.k.a. computationally infeasible).

3) The algorithm is collision-free, meaning that different inputs to the algorithm will yield different outputs.

A nice write-up from RSA labs about hash functions can be found at http://www.rsasecurity.com/rsalabs/node.asp?id=2176.

Now the third property is somewhat of an issue. Due to the pigeonhole principle (http://en.wikipedia.org/wiki/Pigeonhole_principle) a hash function with a fixed output size is never truly collision-free. Here is the rationale:
– Assume a hash function with a fixed output size. Call the set of possible values that the hash function maps to (the range) B.

– Take a set of unique inputs to the hash function with at least |B|+1 elements. Call this set A.

– Since the hash function maps A to B, there is guaranteed to be a collision, since there are more inputs than there are unique outputs (|A| > |B|).
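The pigeonhole argument is easy to see with a toy one-byte “hash” (my own illustration, built by truncating MD5): 257 distinct inputs must produce at least one repeated output, since only 256 outputs exist.

```python
import hashlib

def tiny_hash(data: bytes) -> int:
    """A toy hash with a one-byte output: the first byte of MD5."""
    return hashlib.md5(data).digest()[0]

# 257 unique inputs, only 256 possible outputs: a collision is forced.
seen = {}
collision = None
for i in range(257):
    data = b"input-%d" % i
    h = tiny_hash(data)
    if h in seen:
        collision = (seen[h], data)
        break
    seen[h] = data

assert collision is not None   # guaranteed by the pigeonhole principle
```

Real MD5 has 2^128 possible outputs instead of 256, so collisions are guaranteed to exist by the same reasoning; the only question is how hard they are to find.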

So, since we have collisions with MD5, does this mean that all of the criminals convicted with digital evidence will run free? Have they all been wrongly convicted? I would guess the answer is no.

First there is the inherent trust in the data collection, handling, processing, analysis, and presentation. This is where things such as chain of custody and trust in the examiner come into play. While none of this creates a deductive chain of logic that we can rely on for the integrity of evidence, current practices do not seem to fail either.

Second, the attacks against MD5 have been against the aspect of being collision free. Nested in the concept of being collision free are two additional concepts:

3.1) A hash function is weakly collision-free if, given an input X, it is computationally infeasible to find another input Y, where X != Y, yet both produce the same hash sum. Finding such a Y is also called a “pre-image attack” (more precisely, a second pre-image attack) by the folks at NSRL.

3.2) A hash function is strongly collision-free if it is computationally infeasible to find two different inputs X and Y that produce the same hash sum.

The key difference between 3.1 and 3.2 is that 3.1 has a pre-specified initial input X, and consequently a pre-specified hash sum to match, while 3.2 prescribes that it is computationally infeasible to find any two different inputs which yield the same output. The subtle difference is that in one case you’re trying to find an input that will yield a given output, while in the other you just need to find two inputs that have the same output.

The role of MD5 in helping show preservation of evidence integrity in digital forensics is the fact that it is weakly collision-free. Courtesy of the pigeon-hole principle, it has been known that there ARE collisions, just finding them was rather time consuming. The attacks against MD5 were focused on 3.2, showing that MD5 was not strongly collision-free.
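The gap between the two properties can be felt with truncated hashes (my own toy experiment, not an attack on full MD5): finding any colliding pair on a 24-bit truncation of MD5 takes roughly 2^12 tries on average (the birthday bound), while matching one pre-specified 24-bit value takes on the order of 2^24.

```python
import hashlib

def md5_24(data: bytes) -> bytes:
    """First 3 bytes (24 bits) of MD5 -- a deliberately weak toy hash."""
    return hashlib.md5(data).digest()[:3]

def birthday_collision():
    """Find ANY two inputs with the same 24-bit hash. The birthday
    bound makes this succeed after roughly 2**12 tries on average,
    far fewer than the 2**24 a second pre-image search would need."""
    seen = {}
    for i in range(2**24):
        data = b"msg-%d" % i
        h = md5_24(data)
        if h in seen:
            return seen[h], data
        seen[h] = data

x, y = birthday_collision()
assert x != y and md5_24(x) == md5_24(y)
```

This is why attacks on the strongly collision-free property alone do not immediately break the weakly collision-free property the forensic argument relies on: the attacker gets to choose both inputs, rather than having to match the hash of a pre-existing evidence image.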

Here is a deductive argument:

Premise 1:
If you have a hash algorithm suitable for digital forensics [A], then it is weakly collision-free [B] (don’t worry about strongly collision-free, we won’t use it).
A -> B

Premise 2:
If a hash algorithm is weakly collision-free [B] and given an input [C] then it is computationally infeasible to find another unique input with the same hash sum [D] .
(B ^ C) -> D

Premise 3:
The MD5 algorithm is suitable for digital forensics [A]
A

Premise 4:
A bit-stream image of evidence is an input [C]
C

Argument:
The MD5 algorithm is weakly collision-free.

Proof:
1. A (by Premise 3)
2. A -> B (by Premise 1)
3. ((A -> B) ^ A) (by Conjunction of Premise 3 and 1)
4. B (by Modus Ponens and 3)

Argument:
Given a bit-stream image of evidence, it is computationally infeasible to find another input that yields the same MD5 hash sum.

Proof:
5. C (by the argument)
6. B ^ C (by Conjunction of 4 and 5)
7. (B ^ C) -> D (by Premise 2)
8. ((B ^ C) -> D) ^ (B ^ C) (by Conjunction of 6 and 7)
9. D (by Modus Ponens and 8)

Of course, the underlying premise that is up for attack is Premise 3, which states that MD5 is suitable for digital forensics. Notice the recent attacks don’t come into play, since they focus on the strongly collision-free aspect of MD5. The folks at NSRL also stated:

“none of the attacks are pre-image attacks. In a pre-image attack, an attacker
must manufacture a file with a previously known hash. All of the current attacks
are only able to produce two files with the same hash, but it is a random hash. “
(http://www.nsrl.nist.gov/documents/analysis/draft-060530.pdf)

There are however projects underway to undermine the weakly collision-free aspect of the MD5 algorithm. Notably, the folks over at the Digital Forensics Research Workshop (a.k.a. DFRWS) have issued a challenge (http://www.dfrws.org/hashchallenge/index.html).

So in summary, MD5 is still suitable for digital forensic use, although it might be advisable to use an additional algorithm such as SHA-1 or SHA-256.

The switch to Levenger

For years I’ve carried around a small notebook, one of the spiral-bound ones that is almost the size of a 3×5 card. I’ve even got a nice leather cover for them somewhere at my Dad’s house. I normally use the notebook for things like case notes, observations, grocery lists, etc. The most recent notebook had been so used and abused that it was actually held together by duct tape.

While at the Denver SANS conference, I noticed one of the students was using a Levenger pocket briefcase to take notes. After seeing how well this system worked out for him, I perused the Levenger website (Levenger.com), bit the bullet, and ordered a black Levenger Flip Pocket Briefcase (http://www.levenger.com/PAGETEMPLATES/PRODUCT/Product.asp?Params=Category=11-76|PageID=2458|Level=2-3#).

All I can say is the device is great. I carry it with me everywhere and use it for case notes and pretty much everything else. I haven’t abandoned my higher-tech devices such as the Blackberry, but the pocket briefcase is much more efficient for that on-the-spot information gathering.