Monday, July 18, 2011

How Do I Get There From Here?

I have had several people ask me lately, "How do I get there from here?" Not referring to the country song by Deana Carter, but referring to how to get a job as a forensic investigator/incident responder. So, after thinking about it, here are some ideas based on what helped me get to where I am today. Hopefully, you will find them helpful as well.
First of all, you need a good attitude. You need to leave your ego or any overinflated sense of superiority at the door. Some of the absolute BEST people in this industry...guys like Harlan Carvey, Rob Lee, Ovie Carroll, Cory Altheide, Hal Pomeranz, Chad Tilbury, Lenny Zeltser, Jesse Kornblum, Colin Sheppard, Chris Hague, Jibran Ilyas, Grayson Lenik and Eric Huber all share a common trait...Humility. I bet if you asked any one of them if they were good at what they do, you would likely get some variant of the response, "I sure try, but there is always so much to learn!"

They know they do not know everything, and work hard to keep current on emerging concepts and technologies. I have met them all, and there is absolutely NO pretense in any of these industry giants. Also, they are passionate about their work, and love what they do. They are the best because they work the hardest. Period.

You also need to be flexible. The slogan of this industry is "semper gumby" - always flexible. You need to be able to adapt to constantly changing situations, emerging evidence, difficult customers, challenging timetables, and extensive travel. Don't be too rigid, or get frustrated when things either change unexpectedly, or don't turn out as planned.
And travel...loooooots of travel. As an example, I am writing this in the airport during week three of a seven-week travel spree. You will travel...a LOT...so get used to it.

Second, you have to be wired for this kind of work. By "wired", I mean you just have to "get" technology. You have to have a knack for computers beyond the skills and abilities of what would commonly be referred to as a "normal" end user. You cannot be scared by the command line, Linux, Master Boot Records, Master File Tables, the Windows Registry, the OSI model, Perl, Ruby, and/or Python (just to name a few). You need to be able to read, comprehend, and figure stuff out. You should know what you are looking at, and why, and be able to explain it to anyone. In short, you need to be either inherently smart, or prepared to work really hard (I fall into the latter category - not the smartest dood in the room, but I think I work as hard as, or harder than, just about anyone). In my opinion, having a concrete foundational knowledge is essential for the job, and is really the difference maker between someone who is OK at the job and someone who is really good. So never stop learning!

Remember, knowing how to use a tool (any tool) no more makes you an investigator than knowing how to use MS Word makes you Stephen King. It's a tool that does something...NOTHING more. It's the expert set of eyes on the screen and the expert fingers on the keyboard that make up the expert.

Third, you need a desire to find the truth. The evidence is there (usually), and it's up to you to find it, and interpret it properly. Also, there is a famous quote by Dr. Carl Sagan, who stated, "The absence of evidence is not the evidence of absence." Remember, it is the job of the investigator to identify and properly interpret the evidence.

These are the precepts you should hang your "hat" on. Find the truth. Dig it out of every registry hive, file system, unallocated cluster, slack space, and network capture you can find.
Along those lines, Harlan and I were recently having a discussion over breakfast about context. The upshot was that many investigators will jump to conclusions based on a single data point without building appropriate context around that data point. Why is it there? What does it mean? Am I drawing conclusions based on theory or fact? Are there other data points that all indicate the same "thing" took place? For us, best practice is to identify at least three data points that all point in the same direction. This will give the investigator confidence in what they found (that it is indeed accurate), and give weight to the evidence.

This is something I touch on in Sniper Forensics. NEVER EVER form your opinion about what happened and try to make the data fit your theory. Let the data formulate your theory, and allow your investigation to flow with the evidence. You may change directions numerous times. Doing so doesn't mean you are wrong, or a bad investigator. It means you know enough to allow the evidence to guide the investigation. It's a complex, fluid combination of art and science, and if it were easy, everybody would do it and be good at it.

OK...so now that we have covered some of the basics regarding attitude, and some philosophical essentials, let's talk about education. You need it. Personally, I am not a huge fan of the forensic degree programs currently being taught at many universities. From what I have seen, they teach tool use, and maybe a little theory, which is good, but not something that is going to equip an investigator for a successful career in the field. I would LOVE to see them teach the history of forensic science, logic, investigative methodology, technical writing, research methodologies, public speaking, conflict resolution, and systems administration. These are the key components of a solid investigator...not knowing how to use a tool! If you have the opportunity to take any class that covers these topics, I would HIGHLY recommend doing so. You would be amazed if I told you how relevant my Pre-Socratic Philosophy class is to my job! Or how much better my reports are after taking a technical writing course. The independent research I have done on expert witness testimony has made me better prepared to speak on the stand. Taking a class that certifies you in how to use a certain tool...ya...not gonna teach you ANY of those things...I'm juuuuuuuuuuuust sayin...

In my opinion, if you are looking into a degree program, take something that is going to teach you what "normal" looks like. Get a general IT degree that is well rounded with courses in Windows, Linux, networking, midrange, and emerging technologies. You can learn the tools later; knowing the basics will serve you far better in the field.

I am a fan of technical certifications...sort of. I have several, and I feel like I got something out of studying for, taking, and passing the requisite examinations. I think the subject matter is relatively small (compared to the larger IT world), focused, and can contribute to your subject matter expertise in a specific area.

Now, I am only partially a fan of certifications for a couple of reasons. I know several people who have multiple certifications, and are crummy investigators. Conversely, I know several people who have few or no technical certifications who are fantastic investigators. Again, those little letters after your name don't make you a good investigator. They mean you paid some money, sat in a class, and passed an exam. Nothing more. If you have multiple certs...good for you...don't get a big head about it. If you don't have any...don't let it discourage you. They are what they are...indicators that you took a class and passed a test.

Don't get me wrong, from a business perspective, technical certifications go a long way in establishing you as a subject matter expert (some contracts I have worked on even required them). Also, they can show prospective employers that you are serious about your trade, and have taken steps to set yourself apart from other applicants. But don't ever think that just because you have a cert and someone else doesn't that you are "better" than they are. It's simply not the case...ever...and it's just going to make you look like a jerk. I recommend taking the approach that you love the trade and want to learn as much as you can about it. You are fortunate enough to have the resources necessary to attend the class and take the exam. It was a great experience, and you feel that you have benefitted from the knowledge you gained. BUT, you realize that the forensics/IR world is a big place with a LOT to learn, and you are eager to be engaged in any way you can (recognize your efforts without breaking your arm patting yourself on the back...good skill to have). If you are good at what you do, your actions will speak far louder than any certifications ever could.

Next, know that you are going to have to interact with customers...a lot. You are going to have to explain some very technical concepts to non-technical people - not stupid, just not technical. You are going to have to deal with angry lawyers, crying business owners, demands, fear, and uncertainty. Basically, every new case is everyone's worst day. You need to become skilled in situational analysis, leadership, public speaking, and incident management. You will have to learn how to walk the line (a very fine line sometimes) between confidence and arrogance. This is a difficult concept to learn, and honestly, after studying it in both my undergrad and graduate degree programs, at Warrant Officer Candidate School, and reading books about it...it's something you are going to have to experience to get good at. At least by doing the research on it, you can better prepare yourself, and decrease the time it's going to take you to become proficient.

I also recommend reading Dale Carnegie's How to Win Friends and Influence People at least once per year. Take good notes, and use them. It has a wealth of information and has been THE standard for interpersonal business relationships for some 75 years. Also, realize that at the other end of your contract is a person...a human being. This is their business, or their company...their livelihood. This is how they put a roof over their head, food on their table, and their kids through school. Be cognizant of that, and empathetic to their situation.

Finally, I will share some personal details about how I broke into the industry. When I was a sysadmin, I got bored. You can only make things work so well, and know how to troubleshoot so much, before it becomes mundane. That was the case with me...I was a Solaris and Windows admin at a decently sized IT shop and I was pretty good. My systems ran well, I could troubleshoot quickly and efficiently...and I was bored to tears. So, I searched internally for openings doing something different and I came across a posting for the Ethical Hacking Team. I had all of the required skills (networking, Linux, Windows), no different than any of the other applicants. But what I had that they did not was raw desire. I wanted this job more than anything. I read anything I could get my hands on that dealt with the subject, and spent my own money setting up a makeshift lab to play with tools and run experiments. I ooozed enthusiasm. I ended up getting the job. After I was hired, I asked my new manager what it was about me that ended up landing me the job. She told me something I have never forgotten to this day...

"Chris, I can teach you how to use the tools. The other folks on the team can teach you how to go after certain targets, what to look for, and how to run exploits. What I can't teach is enthusiasm. I know that you will be one of my best pentesters in a year simply because you want to be. I firmly believe you wanted the job more than anyone else."

So, while being passionate may not land you the job, it will set you apart from other applicants. Read, research, study, conduct experiments. Learn something new every day. Learn how to use open source tools (which are like 99% of what I use). Learn about forensic theory, investigative methodology, and logic. Learn how to write reports, how to deal with difficult situations and difficult people, and how to LISTEN! Most of all, love the work!

I hope you find this information helpful. If you have any specific questions, please feel free to email me at any time. I am always willing to help!

Happy Hunting!

Friday, July 15, 2011

Log2Timeline and Super Timelines

With the recent release of Kristinn Gudjonsson's Log2Timeline v0.60, oddly named "The Killer Dwarf" (Ya...you had to be there), generating super timelines has become easier than ever. However, before we get into the technical specifics of exactly HOW this is done, let's cover the two divergent theories about timelines.

For the purposes of this post, I will refer to the two groups as the Hogs and the Budgies. Yes...I know I am terrible at naming things, but after you hear my rationale behind these names, you will at least know my thought process. First of all, both sides agree that timelines should be made. In fact, I am not entirely sure how I ever conducted an investigation without making a timeline, and I am even less sure about how anyone currently conducting investigations can think they are doing a comprehensive job without timelines! The separation in philosophies comes from exactly what data elements to include in the timeline.

Hogs want to include everything...file system data, event logs, registry last write times, application logs...whatever you have, throw it in there. The theory is: I am not entirely sure what I will need, or what I really want to see, so just show me everything and I will decide later.

Budgies are the exact opposite...they want to see a much smaller data sample. Presumably, they know precisely what it is that they want, and only want to see that data.

I categorize myself as a Flying Pig, because what I want to look at changes from case to case. Sometimes I only need data from the active file system, while other times I might want to see the event logs and just the System hive last write times.

I think it's OK to be a Flying Pig; in my opinion, it's a good marriage of the two approaches, including just the right data elements in your timeline. If you are new to making forensic timelines, my recommendation is to be a Hog. Gather all of the data you can and throw it into your super timeline. Hopefully, as you get more and more familiar with what data provides value to your investigations, you will get better at determining which elements to include. The fact that you are doing timelines at all sadly puts you in a very small (yet hopefully growing) number of investigators...so keep it up, however you choose to do it.

Now, on to the technical goodness!

Getting Log2Timeline to run properly in Windows was a bit of a challenge. I worked with Kristinn for about a month tweaking Perl modules until we finally got a product that worked properly.

To start with, go to www.log2timeline.net and download the latest version, and the Windows install guide. Once you have the files, unpack them into your tools directory and follow the install guide. I am not going to say much more about that here, other than I KNOW for a fact that it works...since I am the one that wrote it =). So if you follow it step by step, you should not have any problems.

What makes the newest release of Log2Timeline really powerful is the addition of the recurse option. This means that you can throw all of the data you want added to your timeline into a single directory, and use Log2Timeline to recurse through that directory and add any applicable files to the timeline.

Arguably just as important and powerful a change is the addition of file carving functionality with plugin grouping (much like Harlan Carvey uses in RegRipper).

For example...let's say you acquire data from a Windows XP system. You have the event logs, the registry hives, a couple of ntuser.dat files, and the Master File Table. You can chunk (yes...that is an Oklahoma term) them all into a single directory and use the following command syntax.

c:\tools\log2timeline>perl log2timeline.pl -m "keyword" -z CST6CDT -r vol -f winxp -w c:\cases\timeline\supertimeline

Let's take a look at these options one by one.

The -m option allows you to put in a keyword. Normally, I use the hostname and the drive letter...for example...cpbeefcake_win7_c:\. This can be anything that will allow you to quickly and easily distinguish one timeline from another.

The -z option allows you to set the timezone for the timeline. This step cannot, and should not, be skipped. While I live in the central timezone, I work cases in multiple other timezones. By default, if you don't specify a timezone, Log2Timeline will use the timezone of the local host. Now, if the case you are working is in, say, Pacific Standard Time, and your timeline gets generated in Eastern Standard Time, your timeline will be off by three hours! That is a HUGE margin of error, and will no doubt mess with the accuracy of your findings.
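If you are not sure of the exact timezone name to pass, the tool can print the list of zones it understands. This is a minimal sketch from memory (the "-z list" usage is how I recall it working; confirm against the help output of your version):

c:\tools\log2timeline>perl log2timeline.pl -z list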

The -r option, which we talked about briefly, is used to recurse through a directory. Log2timeline examines the header of each file in the directory. Once it obtains that data, it compares the headers to the known headers for the various plugin types. If a header is recognized, it will automatically load the appropriate plugin, parse the chronological data from the file, and put it into the timeline (pretty sweet!).
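The recurse option is not limited to a directory of hand-picked files, either. As a purely hypothetical example (the drive letter, keyword, and output path below are made up), you could point it at an image you have mounted read-only with a tool like ImDisk or FTK Imager and let it walk the whole volume:

c:\tools\log2timeline>perl log2timeline.pl -m "hostname_f" -z CST6CDT -r f:\ -f winxp -w c:\cases\timeline\supertimeline_full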

The -f option identifies the file type. This can either be the specific file type (if you are only parsing a single file) or a set of plugins if you are parsing the files from a specific operating system. In my example, I used the "winxp" plugin, which automatically loads all of the plugins needed for a Windows XP system.
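If you want to see which input modules and plugin groups your install actually supports, the front end can list them for you. Again, a sketch based on how I remember the option working (verify with the tool's usage output):

c:\tools\log2timeline>perl log2timeline.pl -f list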

The -w is the write option. This tells the tool where to write the output file...pretty basic. By default, the tool writes the output in CSV format. DO NOT append the .csv file extension to the output file name. I am not sure why this hoarks up the output file, but it does. For some reason, the column headers are left off the file and l2t_process will fail. I need to get with Kristinn on this.

If done correctly, your column headers should look like this...

c:\tools\test>strings 1

date,time,timezone,MACB,source,sourcetype,type,user,host,short,desc,version,filename,inode,notes,format,extra

Now, if you want to, you can append, say, the contents of the Master File Table, or a timeline you created with Mactime, to your initial output file. Again, since Log2timeline outputs in CSV format by default, you would need to append the final output from mactime, and not a bodyfile generated from FLS.
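As a rough sketch of what that might look like (the image name and paths are hypothetical, and on Windows mactime is a Perl script, so you may need to call it through perl), you could build a delimited file system timeline with The Sleuth Kit and then tack it onto the end of your Log2Timeline output:

c:\tools\tsk>fls -r -m c: c:\cases\disk.dd > c:\cases\bodyfile
c:\tools\tsk>perl mactime.pl -b c:\cases\bodyfile -d -z CST6CDT > c:\cases\fs_timeline
c:\tools\tsk>type c:\cases\fs_timeline >> c:\cases\timeline\supertimeline

Keep in mind that mactime's delimited columns are not the same as the Log2Timeline header, so be prepared to massage the appended rows a bit before you sort the combined file.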

After you have your super file the way you want it, with all of the data you want in it, you will need to make sure the file is in chronological order, since Log2Timeline simply adds data to the super file in sequential order (in the order it was read, or appended).

To do this, use the following command...

c:\tools\log2timeline>perl l2t_process -b super > supertimeline

l2t_process will chronologically arrange the data from the super file into the correct order, with the first entry at the top and the last entry at the bottom. Pretty nice!
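l2t_process can also trim the output down to just the window of time you care about. This is a hypothetical example based on my recollection of the date range syntax (check the script's usage statement for the exact format before relying on it):

c:\tools\log2timeline>perl l2t_process -b super 02-01-2009..03-01-2009 > supertimeline_feb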

Another great feature is the ability to use the MFT in the supertimeline! Check it...

date,time,timezone,MACB,source,sourcetype,type,user,host,short,desc,version,filename,inode,notes,format,extra
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$SI [MACB] time,-,-,/$MFT,/$MFT,2,/$MFT,0, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$FN [MACB] time,-,-,/$MFT,/$MFT,2,/$MFT,0, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$SI [MACB] time,-,-,/$MFTMirr,/$MFTMirr,2,/$MFTMirr,1, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$FN [MACB] time,-,-,/$MFTMirr,/$MFTMirr,2,/$MFTMirr,1, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$SI [MACB] time,-,-,/$LogFile,/$LogFile,2,/$LogFile,2, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$FN [MACB] time,-,-,/$LogFile,/$LogFile,2,/$LogFile,2, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$SI [MACB] time,-,-,/$Volume,/$Volume,2,/$Volume,3, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$FN [MACB] time,-,-,/$Volume,/$Volume,2,/$Volume,3, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$SI [MACB] time,-,-,/$AttrDef,/$AttrDef,2,/$AttrDef,4, ,Log2t::input::mft,-
02/26/2009,20:51:34,CST6CDT,MACB,FILE,NTFS $MFT,$FN [MACB] time,-,-,/$AttrDef,/$AttrDef,2,/$AttrDef,4, ,Log2t::input::mft,-

You see the $SI and $FN in column seven? That's right baby! Timestomping has NEVER been easier to detect! You will see...plain as day...when the chronological data has been manipulated, since the $SI and $FN attributes will be different! Provided you search by keyword, they will appear literally right on top of each other! Very nice addition!
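Pulling both attribute entries for a given file out of a big super timeline is as simple as a keyword search. For example (the file name here is made up for illustration), the built-in Windows findstr command will put the $SI and $FN lines for that file right next to each other on your screen:

c:\tools\test>findstr /i "svchost32.exe" supertimeline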

I almost wish it were harder than that to create super timelines, but it's really not. Kristinn has done a fantastic job on the latest release of Log2Timeline. There are numerous other options the tool can use, and for the sake of brevity (not to mention the fact that you can read) I have not covered all potential option combinations. My advice is to take some time and play with the tool. Get to know how it works, what the output looks like, and what commands you think are the most relevant for the timelines you are creating.

Serious props to Kristinn for making this extremely useful and powerful tool free to the forensic community. He has done an outstanding job, and honestly, like Harlan's RegRipper and Mandiant's Memoryze, this is a game changer.

Happy Hunting!

Wednesday, July 6, 2011

MBR Analysis

A few weeks ago, Harlan touched on the concept of analyzing the Master Boot Record (MBR or $BOOT) for signs of malware infestation. That got me thinking, "What would that really look like?" So, I tested it and thought I would share my results.

To recap Harlan's post, the MBR basically contains the partition table for a Windows system. On a typical NTFS host, the starting offset for the primary partition that contains the operating system is sector 63. This may vary based on the type of system or the configuration, but generally speaking, this is pretty consistent. An easy way to check an image for the offset values is The Sleuth Kit's tool "mmls". By running mmls against an image, you will see the offset values for the partitions.
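Here is roughly what that looks like against a plain, single-partition XP-era image (the image name and sector counts below are made up for illustration; your numbers will differ):

c:\tools\tsk>mmls c:\cases\disk.dd
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors

     Slot    Start        End          Length       Description
00:  Meta    0000000000   0000000000   0000000001   Primary Table (#0)
01:  -----   0000000000   0000000062   0000000063   Unallocated
02:  00:00   0000000063   0156296384   0156296322   NTFS (0x07)

The NTFS partition starting at sector 63 is exactly what you would expect to see; anything extra living in that unallocated gap in front of it deserves a second look.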

Now, how malware comes into play here is very interesting, and very clever. Let's take a "typical" Windows NTFS system and assume that the OS partition is located where we would expect to see it, at sector 63. But what if there were a partition entry pointing at sector 62? Would you even recognize it, or if you did, would you even care? It's not sector 63, right? And when you mount the partition at sector 63 you see the NTFS file system...plain as day...so no harm no foul, right? Wrong, and here's why.

The malware adds a partition entry pointing to sector 62 and copies the MBR code there, along with a jump statement. The system boots and sees the MBR code at sector 62 FIRST. It reads the data and, if malware is present, executes it. It then follows the jump to sector 63, the NTFS file system is recognized, and the normal boot process resumes. When the malware runs on the infected system, the traces are NOT in the primary file system, because they are stored in that other partition! Pretty slick!

After some digging around, I found a pretty nice Perl script called MBRparser, by Gary Kessler. It's easy to use and shows you exactly what you would need to see when looking for MBR infections. In the screenshot below, I used Gary's tool to parse the MBR from my local Windows 7 Dell laptop.



As you can see, since I have a typical NTFS file system, my first partition starts at sector 63, exactly what I would expect to see. What I would NOT expect to see is an entry prior to sector 63. If I exported the MBR (again, $BOOT) from a target system, parsed it with MBRparser, and saw a partition entry prior to sector 63, I would immediately become suspicious.

Now, don't think that every time you have a partition before the NTFS file system that you have MBR malware. There are systems that intentionally put partitions with vendor tools, or other data, there. So, "Don't Panic"...at least not yet. If you see something there before the NTFS file system, you can either mount it with a tool like ImDisk or FTK Imager, or you can extract the data using The Sleuth Kit's "blkls". Then you can look at the data and decide for yourself if it's just benign vendor stuff, or if it's malware.
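One way to pull that data out for a closer look (this sketch uses The Sleuth Kit's mmcat rather than blkls, and the image name and slot number are hypothetical) is to note the slot number from the mmls output and dump that entry to a file you can then examine with a hex editor or strings:

c:\tools\tsk>mmls c:\cases\disk.dd
c:\tools\tsk>mmcat c:\cases\disk.dd 1 > c:\cases\gap_data.bin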

The real takeaway here is to actually start looking. By adding this step to your malware detection methodology, you will increase your chances of catching an infection of this nature. And, since you were likely not doing this in the first place, you have made yourself an exponentially better investigator.

