Mac OS Daily Logs

Overview

I recently attended the awesome SANS DFIR Mac and iOS Forensics and Incident Response course with Sarah Edwards. This has given me lots of great inspiration on how to approach Mac analysis in general and to take a closer look at some of those system files that we covered in training.

I’ve spent a little bit of time digging through the log files on my MacBook (Mojave 10.14.2). I’m sure this isn’t new to most practised Unix beards but for those who aren’t aware, there’s a really great little log file called daily.out in /var/log. I had previously given little credence to this log but realised it can be used to determine a whole wealth of useful information. I also reviewed the weekly.out and monthly.out files but these were, in my case, far less granular.

At a high level, daily.out contains information relating to disk usage and networking. The file is written at least daily, and the configurations for all three of the periodic logs are stored in plist files in the following location:

/System/Library/LaunchDaemons/com.apple.periodic-*****.plist

After reviewing the content of this file, I started to consider how it might assist in some of my casework.

Disk Usage

Firstly, I borrowed some grep skills from a very knowledgeable and tall colleague on my team to see if we could parse out some specific information from the daily.out file. We extracted only the lines containing the dates, followed by the lines which related specifically to disk usage.

grep -E -e "\w{3} \w{3} .\d (\d\d\:){2}" -e "(/dev/disk|Disk status|iused)" daily.out
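
If you want each disk usage line paired with the date entry it sits under, a rough awk sketch along the same lines may help (this assumes the stock daily.out layout, where each run starts with a date line beginning with the weekday and month; adjust the patterns to suit your output):

# pair each /dev/disk line with the date header above it
awk '/^[A-Z][a-z][a-z] [A-Z][a-z][a-z] /{d=$0} /\/dev\/disk/{print d " | " $0}' daily.out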

From the grep output, we were able to find entries dating back as far as three months, and to see that the log contains:

  • Logical volumes mounted at the time entries are written
  • Size of volumes
  • Space used on volumes

As you can imagine, disk volume information will be highly valuable in showing drives or images which were attached when the log was written, especially if you know the volume name used by a device you’re looking to prove access to.

We can also ascertain some other information from this log which is quite valuable.

Bootcamp!

Screen Shot 2018-12-11 at 9.37.25 am

 

You may have an instance where a suspect, subject or general bad person claims they have never used their Bootcamp install; however, you can see from the Bootcamp disk usage that the volume is being written to regularly. Perhaps a big chunk of data was deleted before a date of interest?

Uptime

Another interesting piece from the daily.out file is that it will show uptime of the system when the log entries are written. This could help prove whether or not the system was switched on and in use over a specific period.

This may also show some interesting information about account usage on the computer. Macs generally tend to be used by individuals, so there’s usually only one account logged on at any time. If you have an experienced user who elevates to root every day, then seeing multiple accounts logged on may not be uncommon. However, if an inexperienced user with no knowledge of the root account appears to be logged on at the same time as another account on many occasions, it may be suspicious or warrant further analysis.

Again, we extracted the lines we are interested in from the daily.out file using a simple grep command:

grep -E -e "\w{3} \w{3} .\d (\d\d\:){2}" -e "Local system status" -e "load averages" daily.out

As you can see we can pull some interesting information about computer and account usage:

  • Shows the uptime of the system at the point at which the daily.out entry is written
  • Also shows the number of users logged on; remember, this is usually going to be one
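
Building on that, a quick way to surface only the entries where more than one user was logged on is to filter out the single-user lines (a sketch, assuming the stock uptime wording of "1 user," versus "2 users,"):

# show only the load average/uptime lines that do not report a single logged-on user
grep "load averages" daily.out | grep -v " 1 user,"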

There are also some very useful network interface statistics listed in this file which are probably more relevant to IR investigations but we may look at these another time.

Reference:

https://www.sans.org/cyber-security-summit/archives/file/summit-archive-1493741667.pdf

http://thexlab.com/faqs/maintscripts.html

AWS for Forensics (5)

Capturing Evidence from AWS

We previously discussed how to upload evidence into our AWS environment for analysis. This is something which clearly has benefits due to the ease of spinning up very high spec systems in a matter of minutes. In saying this, there are also inhibiting factors such as bandwidth availability, cost and legal issues which may arise. I don’t wish to cover too many of the technical elements, as there are mountains of information on Amazon’s site. We shall now cover some scenarios where we may need to capture evidence from AWS.

In any AWS evidence collection you are likely to require:

1: Appropriate access to the platform (if it’s not your own), including credentials and audit logging for your access.

2: Analysis systems (scope AWS specifications prior to starting) with a full suite of tools, either in your own AWS environment or your client’s.

3: A thorough understanding of the infrastructure, evidence sources, time frames and objectives.

4: Evidence naming convention to apply as tags to volumes or snapshots, which can be used for continuity, chain of custody, and later identifying your evidence volumes (see the example command after this list).

5: For larger environments, consideration of what you will do with any volumes, snapshots or AMIs once your collection/analysis phase is complete, e.g. download or deletion.

(At scale, this approach of image preservation will cost a number of bit-schmeckles. An exit strategy which allows for maximum data integrity, evidence continuity and cost-effectiveness will pre-empt any difficult cost implications or conversations for you further down the track)
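
On point 4, a minimal sketch of how such tags might be applied from the AWS CLI; the snapshot ID, keys and values below are hypothetical and should follow your own convention:

# tag an evidence snapshot with a case reference and exhibit number (hypothetical values)
aws ec2 create-tags --resources snap-0123456789abcdef0 \
    --tags Key=CaseRef,Value=CASE-2018-001 Key=Exhibit,Value=EX-01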

Sharing Evidence between AWS Accounts

There may be a few reasons why you would want to share a particular snapshot from your target environment. The target environment may be compromised, they may not wish to have you on-site for a number of reasons, or you may just want to share snapshots into your own environment as this is where you have all your beefy instances and tools configured.

To share a snapshot from your target environment, you will need to establish which AMI or volume you wish to acquire, take a snapshot of it, and then share that snapshot with your own account, where you can create a volume from it.
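
From the AWS CLI, sharing a snapshot with another account looks roughly like this (a sketch; the snapshot ID and account number are placeholders, and encrypted snapshots need a shareable customer-managed key):

# grant the receiving account permission to create volumes from the snapshot (hypothetical IDs)
aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 \
    --attribute createVolumePermission --operation-type add --user-ids 123456789012

In the receiving account, you can then create a volume from the shared snapshot in the region where your analysis instance lives.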

I won’t discuss hashing but it should be self-explanatory that you need to establish a level of certainty about your evidence, particularly when physically moving data through data centres or downloading from the AWS platform.

Once you have created a volume from the shared snapshot in your own platform, you will need to mount it read-only, at which point you can interrogate it with forensic tools as you normally would.
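
For an ext4 volume attached to a Linux analysis instance, that might look something like the following (the device name is hypothetical, so check lsblk first; the noload option skips journal replay so nothing is written back to the volume):

# mount the attached evidence volume read-only (hypothetical device name)
sudo mkdir -p /mnt/target
sudo mount -o ro,noexec,noload /dev/xvdf1 /mnt/target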

You may wish to capture a forensic image of this volume at this time for continuity or for download and offline analysis. Depending on your case requirements and time constraints, you may also wish to interact with it directly in order to reach quick results.

Provided you have taken appropriate measures to capture your snapshots and volumes, interacting directly with the write blocked volume in this way is an acceptable approach for performing Incident Response (in my humble opinion).

That’s how I approached sharing a volume from the target environment to my own for imaging and analysis.

Analysis in your target environment

The other scenario you may face is where your client hosts data in regions which are outside of your legal jurisdiction, or has non-disclosure agreements or other contractual obligations which will impact your ability to physically take evidence away from their environment.

This can drastically impact your ability to stick to the old dead-box approach of capturing a forensic image and downloading it for offline analysis. I recently worked on a case where no client data (stored in AWS) was to be accessed or taken outside of the client’s offices.

The approach devised involved setting up analysis workstations in multiple regions all over the world to mount volumes from systems which we suspected to be involved in the incident. We had to access these systems from within our client’s offices and, throughout the process, only collect evidence which contained no client data. Essentially, this led to the bulk of our incident response investigation revolving around event logs, registry hives and file system metadata in order to meet these requirements.

This approach is essentially the same as sharing volumes to your own instance, although there are other implications:

1: Hardware dongles – You may wish to use hardware-licensed software. We used VirtualHere to share ours up to AWS and this worked exceptionally well, although there were some firewall and routing issues.

2: IT assistance – Like all cases, administrators of the target infrastructure should be your best friends. Chances are you will need a lot of help from these guys when getting their environment configured to work for you.

3: Time – Depending on the amount of evidence you have to capture, time will definitely be against you. This is quite simply due to the way that AWS manages snapshots and volumes. You need to be very meticulous when creating volumes and documenting your actions.

4: Time – Because we never have enough of it…

So that’s a very high-level view of the approach I’ve used for capturing and analysing evidence in AWS.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html

 

 

(Not Quite) Snapchat Forensics

Overview

For those of us who don’t have access to those GrayKey boxes or Cellebrite services to acquire physical images of devices, we are generally reliant upon logical extractions of iOS devices due to legal limitations or similar. After a recent enquiry relating to Snapchat data and what was held on a device, I found out that Snapchat has a ‘Download My Data’ service, much like Google Takeout.

I had a look into what data is held online and accessible by the user with their basic login. As it happens, as long as you have permission to access the account online, there’s quite a wealth of metadata available.

Points of Interest from an investigative standpoint

I’ve summarised some of the main points I think are interesting. The majority of these I believe would be more useful in criminal matters such as harassment, stalking, sexual abuse or missing person cases but are certainly interesting when considering the lack of available information from the device.

  • Contains Account Creation dates, devices used with the account
    • Useful if there is a dispute over dates of communication and whether there are further devices involved
  • Snap and chat history (no content, and goes back at least one month)
    • Shows communication between specific individuals
  • Lists of friends, friend requests sent, friends blocked, friends deleted (no timestamps)
    • Any of these could be useful in identifying attempted communication or attempted ceasing communication with an individual.
  • Search history (including search term and lat/long coordinates from where those searches were performed)
    • Could indicate some form of intent if searching for a specific individual
  • Frequent locations, locations you have visited and Latest location

To download your Snapchat data

  1. Log in using account credentials and browse to: https://accounts.snapchat.com/accounts/downloadmydata
  2. Once you’re in, scroll to the bottom of the page and hit ‘Submit Request’
  3. Wait…
  4. You should receive an email to the account used to set up the Snapchat account containing a hyperlink to the download page.
  5. Alternatively, if you log back in the zip file will be available on the /downloadmydata page.

Once you download the data, it’ll be in much the same format as a Google Takeout download: HTML and JSON.
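
If you want a quick inventory of what came back before clicking through the HTML pages, something simple like this will do (assuming the export has been unzipped into a mydata folder; file names vary between export versions):

# list the JSON files in the export along with their sizes
find mydata -type f -name '*.json' -exec wc -c {} +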

and that’s it…

If you’re in a bit of a pickle and lacking device data, downloading account data directly from Snapchat may be a second best alternative.

Notifiable Data Breach Statistics

If you’ve been working in Digital Forensics or Incident Response in Australia then you should be aware of the new notifiable data breach legislation overseen by the Office of the Australian Information Commissioner (OAIC).

In amongst the OAIC’s website, there is some very useful information for incident responders as well as companies who are unsure as to whether they need to disclose when they have had a data breach. You may already work for a mature organisation that has had appropriate legal and technical counsel in relation to this, but if it’s all new to you then I suggest now is a very good time to start reading.

The OAIC defines a data breach as follows:

A data breach occurs when personal information that an entity holds is subject to unauthorised access or disclosure, or is lost.

Personal information is information about an identified individual, or an individual who is reasonably identifiable.[1] Entities should be aware that information that is not about an individual on its own can become personal information when it is combined with other information, if this combination results in an individual becoming ‘reasonably identifiable’ as a result.

A data breach may be caused by malicious action (by an external or insider party), human error, or a failure in information handling or security systems.

Examples of data breaches include:

  • loss or theft of physical devices (such as laptops and storage devices) or paper records that contain personal information
  • unauthorised access to personal information by an employee
  • inadvertent disclosure of personal information due to ‘human error’, for example an email sent to the wrong person
  • disclosure of an individual’s personal information to a scammer, as a result of inadequate identity verification procedures.

The OAIC has recently released quarterly statistics and there are some interesting points which have come out of this.

1: The most common breach size was 1,000 or fewer individuals affected, which accounted for 107 of the reported breaches.

2: It would seem that healthcare organisations are still top of the list for all sorts of targeted attacks in Australia. This comes as no surprise and is a statistic common to most developed countries.

Investment in security and infrastructure is clearly still lacking in this area, even after the WannaCry outbreak hit so many systems worldwide.

3: The number of breaches being reported has gradually increased since the start of 2018. Presumably, this will continue to increase.

4: The largest amount of data loss relates to contact information such as names, addresses and email addresses, closely followed by financial details. This reflects the businesses which are predominantly targeted (health and finance).

This is also a clear indicator that low-hanging fruit is going to be the most leveraged by attackers. It should come as no surprise if we see just as many, if not markedly more, targeted phishing campaigns.

5: Human error still accounts for 36% of data breaches, which indicates there is still a major gap in staff awareness across all industries. Interestingly, the most common accidental disclosure was personal information being sent to the wrong email address, but the largest number of affected individuals resulted from loss of paperwork or a storage device.

6: 59% of breaches were due to malicious or criminal attacks. Again, this clearly shows there needs to be further investment in education and security.

The largest category of these malicious or criminal breaches was classed as cyber incidents, which breaks down as follows:

  • Compromised credentials: 34% (the highest)
  • Phishing: 29%

I think we know where this is going… Investment in education, training and security.

Some final observations:

  • Questions from clients and their insurers are gravitating more than ever towards what went out the barn door, rather than what led to the door being opened.
  • Phishing is still a huge problem for all industries.
  • Ransomware is largely going unreported. I suspect this may be due to an assumption by many responding IT vendors that all ransomware outbreaks are ‘smash and grab’ attempts.
  • How and where credentials are being compromised is still largely unknown in most industries.
  • Reporting is on the increase; this can only be a good thing for the general populace.

For further reading:

https://www.oaic.gov.au/resources/privacy-law/privacy-act/notifiable-data-breaches-scheme/quarterly-statistics/notifiable-data-breaches-quarterly-statistics-report-1-april-30-june-2018.pdf

AWS for Forensics (4)

Cloud analysis of local evidence sources

One of the main benefits of analysing evidence in AWS is that we can spin up instances with vast amounts of processing power without too much trouble or cost (in the short term). This can greatly decrease processing time of evidence items and is really useful when you need to determine answers quickly.

A few things that may stand in your way:

  • ISP Bandwidth limitations for evidence uploads
  • Legal or contractual issues around moving evidence into the cloud
  • The sheer volume of evidence for upload

Upload of a local .E01

Presumably, you’ve imaged and hashed your evidence, made sure it’s not encrypted with FileVault or BitLocker and you’re ready to dig in. We can now upload some evidence to our AWS storage volume for analysis.

There are many SSH/SFTP clients out there which you can use for transferring evidence to AWS, PuTTY being the most commonly used. As I’m an unwashed Mac user, my personal favourite is Cyberduck.

Screen Shot 2018-06-29 at 9.26.50 am

Once we have connected over SSH we can upload our evidence, and as you can see this 7 GB image is going to take around 30 minutes over a business fibre broadband connection.

Screen Shot 2018-06-29 at 9.36.12 am
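
If you prefer the command line to Cyberduck, the same upload can be done with scp (a sketch; the key file and hostname are placeholders):

# upload the image over SSH to the evidence volume on the instance (hypothetical key file and hostname)
scp -i my-key.pem Evidence101.E01 ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/mnt/evidence/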

Once our evidence is uploaded we, of course, want to hash it again for integrity checks and for continuity.

ubuntu@ip-17x-x1-x-x9:/mnt/evidence$ md5sum Evidence101.E01
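
A tidy way to do the comparison is to generate the hash file before upload and let md5sum check it on the instance (a sketch, assuming GNU coreutils at both ends; on macOS you may need to install coreutils to get md5sum):

# on the local system, before upload
md5sum Evidence101.E01 > Evidence101.E01.md5
# on the instance, after uploading both files (expects "Evidence101.E01: OK")
md5sum -c Evidence101.E01.md5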

 

We can now get into the fun stuff. 🙂

Screen Shot 2018-05-29 at 9.06.36 pm

Tools like log2timeline will use as many workers as you can allocate, and this is where your beefier instances with more CPU grunt will really shine.
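
As a rough example, something along these lines lets plaso spread the work across more workers on a larger instance (the exact syntax varies between plaso versions, and the output file name is a placeholder):

# build a timeline from the uploaded image using 16 worker processes
log2timeline.py --workers 16 evidence.plaso /mnt/evidence/Evidence101.E01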

Looking at the Windows options for online analysis, there is obviously a vast swathe of forensic tools which can be run in Windows too. However, the issue with some commercial Windows forensic packages is their reliance on hardware dongles. No matter, there’s a solution for that too, which involves setting up a local USB server to share the dongle with a remote host.

Enter VirtualHere.

VirtualHere is also a great way to share dongles across systems on your local network.

Now that we have some processed output, we can either analyse it in place using tools installed in our cloud instance or pull it down to our local system using our SSH client.

Next up, I’ll cover the slightly more complex issue of capturing evidence from an AWS instance.

Reference:

https://www.virtualhere.com/

https://cyberduck.io/

https://putty.org/

https://www.forensicswiki.org/wiki/Encase_image_file_format

https://bsmuir.kinja.com/building-a-licence-dongle-server-with-a-raspberry-pi-1678930193

AWS for Forensics (3)

Connecting to an instance and attaching volumes

Connecting to your instance

By now we should have at least one analysis system in our AWS platform for capturing evidence. We will now need to connect through to this system using our key files so we can configure extra storage and install some analysis tools.

If you haven’t configured key pairs yet, you can create these from the EC2 ‘Network and Security’ menu. It goes without saying that once created you should guard your key file with your life.

Once we have our private key (.pem) file, we can follow the standard AWS process for connecting to our instance.

For Linux instances via SSH:

Screen Shot 2018-06-25 at 12.58.47 pm
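
The standard form of that command looks something like this (the key file name and hostname are placeholders; the default user depends on the AMI, ubuntu in this case):

# the key must not be world-readable, then connect as the AMI's default user
chmod 400 my-key.pem
ssh -i my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com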

Or for Windows instances via RDP:

Screen Shot 2018-06-25 at 1.16.50 pm

Attaching Volumes

As previously discussed, we can attach storage volumes for evidence while configuring our instance or create these after the fact through the ‘Volumes’ menu in EC2. We can also use this method to attach a snapshot as a volume to our analysis instance for imaging.

Screen Shot 2018-05-28 at 8.09.43 pm

As with creating new instances, you will want to record the volume name and assign it some tags for tracking storage volumes. You may also wish to use this feature for evidence naming later on when you are acquiring evidence from AWS.

Screen Shot 2018-05-29 at 11.32.36 am

Once you have your volume created you can choose the instance for which to attach it. This is done by selecting the ‘Actions’ button from the ‘Instances’ menu in EC2.

It should be noted that physical device names will need to be defined based on whether your systems are Windows or Linux.

Screen Shot 2018-05-29 at 12.14.42 pm

Our block device should now be attached to our Ubuntu instance, and we can confirm that by listing the block devices in our instance.

Screen Shot 2018-05-29 at 12.25.59 pm

As our instances in AWS are virtualised on Xen, the xvd* block device naming is used (this caught me out the first time around). Our last task will be to create a file system on our block device.

Screen Shot 2018-05-29 at 12.29.29 pm

Once we have our new evidence storage volume attached we can create a mount point and mount our block device.

Screen Shot 2018-06-27 at 1.44.21 pm

That’s about it; we now have an Ubuntu instance configured with a secondary EXT4 storage volume which can be used as a target disk for forensic images or for storing other files/forensic tools for analysis.
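
Pulling the screenshots above together, the volume preparation looks roughly like this (the device name is hypothetical, and you should obviously only build a new file system on an empty target volume, never on anything holding evidence):

lsblk                                # identify the newly attached device, e.g. /dev/xvdf
sudo mkfs.ext4 /dev/xvdf             # create a file system on the empty target volume only
sudo mkdir -p /mnt/evidence
sudo mount /dev/xvdf /mnt/evidence   # mount it as the evidence/tools target
df -h /mnt/evidence                  # confirm the size and mount point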

Next up: taking snapshots of existing volumes and attaching them for forensic imaging.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html

https://tools.ietf.org/html/rfc1421

https://tools.ietf.org/html/rfc1424

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html

https://askubuntu.com/questions/166083/what-is-the-dev-xvda1-device#166087

AWS for Forensics (2)

My last blog post covered some initial things you might run into when choosing to set up analysis systems in the AWS cloud infrastructure.

Since that last post, I have had some new experience capturing evidence across multiple regions and availability zones in AWS, which has given me new insight that I can share and that will hopefully be of some use.

Launching an Instance

In AWS EC2, an instance is just a virtual machine; there are a number of operating systems and configurations available depending on your requirements. For the purposes of performing evidence collection or analysis in AWS, you should at least have a system with enough storage attached to capture your evidence. Again, the requirements for your evidence collection may change depending on the type of case you are working on, the locations of these evidence sources and other factors, such as time.

To spin up a system, it is as easy as logging into the EC2 console, choosing the Instances menu and selecting an AMI (Amazon Machine Image), which is a pre-built system image available with a variety of operating systems.

Screen Shot 2018-06-23 at 8.41.33 am

It’s worth noting at this point that there is the option to select “Free tier only”. If you, like me, are a skinflint, this is the one for you.

Screen Shot 2018-06-23 at 8.43.21 am

Once you have decided on your AMI, this is when you can choose the hardware configuration of your system.

Try not to get too carried away here…

Screen Shot 2018-06-23 at 8.49.51 am

The next thing we need to do is configure instance details, which relate to networking, scaling, IAM roles and monitoring.

Screen Shot 2018-06-23 at 8.55.40 am

I’m not going to cover everything on this screen but will drop some links for extra reading.

At a very basic level, the options I would pay attention to are:

1: Shutdown behaviour. If your instance is set to terminate when it is shut down, storage on any local drives will be erased. Probably not the best setting if you’re looking to preserve evidence. Great for an attacker.

2: Enable Termination protection. See above.

3: CloudWatch monitoring. CloudWatch is essentially AWS’s monitoring and logging service, with CloudTrail recording API and console activity alongside it. Logs from these services are highly valuable when investigating, as they can show uploads and access to instances or volumes as well as access to the AWS console itself.

While using AWS for performing any forensic analysis or evidence collection, you should absolutely have this turned on. Auditing and logging of your analysis and processes can be assisted massively with this functionality.

4: Tenancy. The option over tenancy may be something you need to consider for legal parameters, as this relates to the shared physical hardware in the data centre where your instance will reside.

Once we have these configurations, we can move onto adding our storage.

Screen Shot 2018-06-23 at 9.36.37 am

The initial configuration for an Ubuntu AMI in the free tier is 8 GB of storage. Increasing the boot volume so it is at least large enough to store your tools and dependencies is fairly obvious. I’d also attach a volume here to capture evidence to, and consider changing the ‘delete on termination’ option. You can attach and detach volumes later on, so if you’re uncertain at this point it is safe to continue without. Encryption of your volume should be fairly self-explanatory.

Creating and adding tags is something I would consider in depth at the outset, while scoping your evidence. This is similar to having a naming convention for internal systems in an office or for evidence exhibits in a case, and I found it the best method for maintaining continuity in both of these areas in AWS.

Screen Shot 2018-06-23 at 9.45.05 am

Lastly, we want to lock down access to our instance, which is done through ‘Security Groups’. Security Groups essentially allow us to implement firewall rules to block or allow certain types of traffic via defined IPs, CIDR blocks, ports and protocols.

Screen Shot 2018-06-23 at 9.50.31 am.png

For my instance, I have configured the security group to only allow access from my own IP at home and at work.
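
The equivalent rule from the AWS CLI would look something like this (the group ID and address are placeholders; use your own public address as a /32):

# allow SSH (22/tcp) only from a single trusted address (hypothetical values)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32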

Once we review our instance we can hit launch, and within a few minutes we will have our system up and running. Always remember to shut down your instances when not in use; this can become quite a costly affair for higher-spec instances.

Screen Shot 2018-06-23 at 9.56.08 am

Instance State can be changed by right-clicking on the instance. Notice the ‘Name’ column; this is a defined tag in AWS and how I prefer to specify evidence names and analysis systems.

 

So that’s about it for spinning up your first AWS instance. Next, I’ll cover how to connect to your instances using different methods.

Reference:

https://aws.amazon.com/autoscaling/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html?icmpid=docs_ec2_console

https://aws.amazon.com/documentation/vpc/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html?icmpid=docs_ec2_console