Notifiable Data Breach Statistics

If you’ve been working in Digital Forensics or Incident Response in Australia, you should be aware of the new Notifiable Data Breaches legislation overseen by the Office of the Australian Information Commissioner (OAIC).

The OAIC’s website contains some very useful information for incident responders, as well as for companies that are unsure whether they need to disclose a data breach. You may already work for a mature organisation that has had appropriate legal and technical counsel on this, but if it’s all new to you, I suggest now is a very good time to start reading.

The OAIC defines a data breach as follows:

A data breach occurs when personal information that an entity holds is subject to unauthorised access or disclosure, or is lost.

Personal information is information about an identified individual, or an individual who is reasonably identifiable.[1] Entities should be aware that information that is not about an individual on its own can become personal information when it is combined with other information, if this combination results in an individual becoming ‘reasonably identifiable’ as a result.

A data breach may be caused by malicious action (by an external or insider party), human error, or a failure in information handling or security systems.

Examples of data breaches include:

  • loss or theft of physical devices (such as laptops and storage devices) or paper records that contain personal information
  • unauthorised access to personal information by an employee
  • inadvertent disclosure of personal information due to ‘human error’, for example an email sent to the wrong person
  • disclosure of an individual’s personal information to a scammer, as a result of inadequate identity verification procedures.

The OAIC has recently released its quarterly statistics, and some interesting points have come out of them.

1: The most common breach size was 1,000 or fewer affected individuals, accounting for 107 of the reported breaches.

2: It would seem that healthcare organisations are still top of the list for all sorts of targeted attacks in Australia. This comes as no surprise and is a statistic common to most developed countries.

Investment in security and infrastructure is clearly still lacking in this sector, even after the WannaCry outbreak hit so many health systems worldwide.

3: The number of breaches being reported has gradually increased since the start of 2018. Presumably, this will continue to increase.

4: The largest category of lost data is contact information, such as names, addresses and email addresses. This is closely followed by financial details, which reflects the types of business predominantly targeted (health and finance).

This is also a clear indicator that the low-hanging fruit is going to be the most leveraged by attackers. It should come as no surprise, then, if we see just as many targeted phishing campaigns, if not a marked increase.

5: Human error still accounts for 36% of data breaches, which indicates there is still a major gap in staff awareness across all industries. Interestingly, most accidental disclosures involved personal information being sent to the wrong email address, but the largest number of affected individuals resulted from the loss of paperwork or a storage device.

6: 59% of breaches were due to malicious or criminal attacks. Again, this clearly shows there needs to be further investment in education and security.

The largest category of these malicious or criminal breaches was classed as ‘cyber incidents’, which break down as follows:

  • Compromised credentials: 34%
  • Phishing: 29%

I think we know where this is going… Investment in education, training and security.

Some final observations:

  • Questions from clients and their insurers are gravitating more so than ever around what went out the barn door, rather than what led to the door being opened.
  • Phishing is still a huge problem for all industries.
  • Ransomware is largely going unreported. I suspect this may be due to an assumption by many IT vendors responding to these incidents that all ransomware outbreaks are ‘smash and grab’ attempts.
  • How credentials are being compromised is still largely unknown in most industries.
  • Reporting is on the increase, which can only be a good thing for the general populace.

For further reading:

https://www.oaic.gov.au/resources/privacy-law/privacy-act/notifiable-data-breaches-scheme/quarterly-statistics/notifiable-data-breaches-quarterly-statistics-report-1-april-30-june-2018.pdf

AWS for Forensics (4)

Cloud analysis of local evidence sources

One of the main benefits of analysing evidence in AWS is that we can spin up instances with vast amounts of processing power without too much trouble or cost (in the short term). This can greatly decrease the processing time of evidence items and is really useful when you need answers quickly.

A few things that may stand in your way:

  • ISP Bandwidth limitations for evidence uploads
  • Legal or contractual issues around moving evidence into the cloud
  • The sheer volume of evidence for upload

Upload of a local .E01

Presumably, you’ve imaged and hashed your evidence, made sure it’s not encrypted with FileVault or BitLocker and you’re ready to dig in. We can now upload some evidence to our AWS storage volume for analysis.

There are many SSH clients out there which you can use for transferring evidence to AWS, PuTTY being the most commonly used. As I’m an unwashed Mac user, my personal favourite is Cyberduck.

[Screenshot: Cyberduck connection]

Once connected over SSH, we can upload our evidence; as you can see, this 7 GB image is going to take around 30 minutes over a business fibre broadband connection.

[Screenshot: evidence upload in progress]
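If you prefer the command line to a GUI client, a plain scp does the same job (the key file name and hostname here are hypothetical):

# upload the image to the evidence volume on the instance
scp -i forensics-key.pem Evidence101.E01 ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com:/mnt/evidence/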

Once our evidence is uploaded we, of course, want to hash it again for integrity checks and for continuity.

ubuntu@ip-17x-x1-x-x9:/mnt/evidence$ md5sum Evidence101.E01
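If you saved the acquisition hash in md5sum’s checksum format at imaging time, you can have the tool do the comparison for you (the checksum file name is hypothetical):

# Evidence101.E01.md5 contains a line of the form: <original hash>  Evidence101.E01
md5sum -c Evidence101.E01.md5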


We can now get into the fun stuff. 🙂

[Screenshot]

Tools like log2timeline will use as many workers as you can allocate, and this is where your beefier instances with more CPU grunt will really shine.
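As a rough sketch of what that looks like (plaso’s exact flags vary between versions, and the paths here are hypothetical):

# parse the image into a plaso storage file using 16 worker processes
log2timeline.py --workers 16 /mnt/evidence/evidence101.plaso /mnt/evidence/Evidence101.E01
# then export a CSV timeline for review
psort.py -o l2tcsv -w /mnt/evidence/timeline.csv /mnt/evidence/evidence101.plaso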

Looking at the Windows options for online analysis, there is obviously a vast swathe of forensic tools which can be run in Windows too. The issue with some commercial Windows forensic packages, however, is their reliance on hardware dongles. No matter, there’s a solution for that too, which involves setting up a local USB server to share the dongle with a remote host.

Enter VirtualHere.

VirtualHere is also a great way to share dongles across systems on your local network.
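A minimal sketch of the idea, assuming the Linux server binary name and default TCP port (7575) from VirtualHere’s documentation at the time:

# on the machine physically hosting the dongle: run the VirtualHere USB server
sudo ./vhusbdx86_64
# the VirtualHere client on the remote analysis host then connects back to this
# machine on TCP 7575, so that port must be reachable, e.g. over a VPN or SSH tunnel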

Now that we have some processed output, we can either analyse it in place using tools installed in our cloud instance or pull it down to our local system using our SSH client.
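Pulling results back down is just the same transfer in reverse (paths hypothetical):

# copy the exported timeline back to the local system
scp -i forensics-key.pem ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com:/mnt/evidence/timeline.csv .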

Next up, I’ll cover the slightly more complex issue of capturing evidence from an AWS instance.

Reference:

https://www.virtualhere.com/

https://cyberduck.io/

https://putty.org/

https://www.forensicswiki.org/wiki/Encase_image_file_format

https://bsmuir.kinja.com/building-a-licence-dongle-server-with-a-raspberry-pi-1678930193

AWS for Forensics (3)

Connecting to an instance and attaching volumes

Connecting to your instance

By now we should have at least one analysis system in our AWS platform for capturing evidence. We will now need to connect to this system using our key files so we can configure extra storage and install some analysis tools.

If you haven’t configured key pairs yet, you can create them from the EC2 ‘Network & Security’ menu. It goes without saying that, once created, you should guard your key file with your life.

Once we have our SSH key file, we can follow the standard AWS process for connecting to our instance.

For Linux instances via SSH:

[Screenshot: connecting to a Linux instance over SSH]
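For a Linux instance, the standard process boils down to two commands (key file name and hostname hypothetical):

# the private key must not be world-readable or ssh will refuse to use it
chmod 400 forensics-key.pem
ssh -i forensics-key.pem ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com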

Or for Windows instances via RDP:

[Screenshot: connecting to a Windows instance over RDP]

Attaching Volumes

As previously discussed, we can attach storage volumes for evidence while configuring our instance or create these after the fact through the ‘Volumes’ menu in EC2. We can also use this method to attach a snapshot as a volume to our analysis instance for imaging.

[Screenshot: the EC2 ‘Volumes’ menu]

As with creating new instances, you will want to record the volume name and assign some tags for tracking your storage volumes. You may also wish to use this feature for evidence naming later on, when you are acquiring evidence from AWS.

[Screenshot: creating and tagging a volume]

Once you have created your volume, you can choose the instance to attach it to. This is done via the ‘Actions’ button in the EC2 ‘Instances’ menu.

It should be noted that physical device names will need to be defined based on whether your systems are Windows or Linux.

[Screenshot: attaching a volume to an instance]
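For repeatability, the same attachment can also be scripted with the AWS CLI (the volume and instance IDs here are hypothetical):

aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf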

Our block device should now be attached to our Ubuntu instance, and we can confirm this by listing the block devices on the instance.

[Screenshot: listing block devices with lsblk]
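On Ubuntu this is a one-liner:

# list block devices; the new volume should appear with no mount point
lsblk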

As AWS instances are virtual machines, the Xen block device naming scheme is used (this caught me out the first time around). Our last task will be to create a file system on our block device.

[Screenshot: creating a file system]

Once we have our new evidence storage volume attached we can create a mount point and mount our block device.

[Screenshot: creating a mount point and mounting the volume]
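Putting those last two steps together, a minimal sketch, assuming the new volume appeared as /dev/xvdf:

# create an EXT4 file system on the new volume (this destroys anything already on it)
sudo mkfs -t ext4 /dev/xvdf
# create a mount point and mount the volume
sudo mkdir /mnt/evidence
sudo mount /dev/xvdf /mnt/evidence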

That’s about it: we now have an Ubuntu instance configured with a secondary EXT4 storage volume, which can be used as a target disk for forensic images or for storing other files and forensic tools for analysis.

Next up: taking snapshots of existing volumes and attaching AWS volumes for forensic imaging.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html

https://tools.ietf.org/html/rfc1421

https://tools.ietf.org/html/rfc1424

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html

https://askubuntu.com/questions/166083/what-is-the-dev-xvda1-device#166087

AWS for Forensics (2)

My last blog post covered some initial things you might run into when choosing to set up analysis systems on the AWS cloud infrastructure.

Since that last post, I have captured evidence across multiple regions and availability zones in AWS, which has certainly given me some new insights that I can share and that will hopefully be of some use.

Launching an Instance

In AWS EC2, an instance is just a virtual machine; there are a number of host operating systems and configurations available depending on your requirements. For the purposes of performing evidence collection or analysis in AWS, you should at least have a system with enough storage attached to capture your evidence. The requirements may change depending on the type of case you are working on, the locations of your evidence sources and other factors, such as time.

Spinning up a system is as easy as logging into the EC2 console, choosing the ‘Instances’ menu and selecting your AMI, which is a system image available with varying host operating systems.

[Screenshot: choosing an AMI]

It’s worth noting at this point that there is the option to select “Free tier only”. If you, like me, are a skinflint, this is the one for you.

[Screenshot: the ‘Free tier only’ option]

Once you have decided on your AMI, this is when you can choose the hardware configuration of your system.

Try not to get too carried away here…

[Screenshot: choosing an instance type]

The next thing we need to do is configure the instance details, which relate to networking, scaling, IAM roles and monitoring.

[Screenshot: configuring instance details]

I’m not going to cover everything on this screen but will drop some links for extra reading.

At a very basic level, the options I would pay attention to are:

1: Shutdown behaviour. Setting your instance to terminate when it is shut down will erase storage on any local drives. Probably not one to set if you’re looking to preserve evidence; great for an attacker, though.

2: Enable Termination protection. See above.

3: CloudWatch Monitoring. CloudWatch is essentially AWS’s monitoring and logging service. Its logs are highly valuable when investigating, as they can show uploads and access to instances or volumes, as well as access to the AWS console itself.

While using AWS for any forensic analysis or evidence collection, you should absolutely have this turned on; it massively assists with auditing and logging your analysis processes.

4: Tenancy. Tenancy may be something you need to consider for legal reasons, as it determines whether your instance shares physical hardware in the data centre with other customers.
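If you are scripting your instance setup, shutdown behaviour and termination protection can both be set with the AWS CLI (the instance ID is hypothetical):

# stop, rather than terminate, on an instance-initiated shutdown
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-initiated-shutdown-behavior stop
# prevent the instance being terminated via the API or console
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --disable-api-termination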

Once we have these configurations, we can move on to adding our storage.

[Screenshot: adding storage]

The default configuration for an Ubuntu AMI in the free tier is 8 GB of storage. You will obviously want to increase the boot volume to at least enough to hold your tools and dependencies. I’d also attach a volume here to capture evidence to, and consider changing the ‘Delete on Termination’ option. You can attach and detach volumes later, so if you’re uncertain at this point it is safe to continue without one. Encryption of your volume should be fairly self-explanatory.

Creating and adding tags is something I would consider in depth at the earliest outset, while scoping your evidence. Tagging is similar to having a naming convention for internal systems in an office, or naming evidence exhibits in a case, and I have found it the best method for maintaining continuity in both of these areas in AWS.

[Screenshot: adding tags]

Lastly, we want to lock down access to our instance, which is done through ‘Security Groups’. Security groups essentially allow us to implement firewall rules to block or allow certain types of traffic by defined IPs, CIDR blocks, ports and protocols.

[Screenshot: configuring a security group]

For my instance, I have configured the security group to only allow access from my own IP at home and at work.
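The equivalent rule from the CLI looks something like this (the group ID and source address are hypothetical):

# allow SSH in from a single source address only
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32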

Once we have reviewed our instance, we can hit launch, and within a few minutes we will have our system up and running. Always remember to shut down your instances when not in use; this can become quite a costly affair with the higher-spec instances.

[Screenshot: the running instance]

Instance state can be changed by right-clicking on the instance. Notice the ‘Name’ column: this is a defined tag in AWS and how I prefer to label evidence and analysis systems.


So that’s about it for spinning up your first AWS instance. Next, I’ll cover how to connect to your instances using different methods.

Reference:

https://aws.amazon.com/autoscaling/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html?icmpid=docs_ec2_console

https://aws.amazon.com/documentation/vpc/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html?icmpid=docs_ec2_console

AWS for Forensics (1)

Initial Account Setup

Emerging cloud technologies like Amazon Web Services (AWS) are going to change the face of traditional forensics forever. Soon, we will rarely get our hands on physical evidence sources attached to systems like these. This leaves a large number of uncertainties surrounding evidence handling, continuity, security and legal permissions both at home and overseas.

Hopefully, this series of posts will help navigate through the acquisition and analysis of these sources.

Some quick observations on account creation:

  • I used a ProtonMail account to sign up for AWS. (It’s quite common for online services to block ProtonMail accounts at signup nowadays.)
  • You must provide a full address and credit card details for account creation. (Amazon will have these on record if you have the legal authority to request them when investigating.)
  • Amazon uses its own terminology for everything, and there’s a bit to learn if you have no previous experience.
  • It looks strangely like the Kindle store.

Anyway… AWS comes in three different flavours, as you can see below. (I chose the cheapest Basic Plan, because I’m a skinflint.)

[Screenshot: AWS support plans]

Elastic Compute Cloud (EC2) is the service I chose once I selected my plan. Although the Lightsail service may provide similar capabilities, I’ll just focus on EC2.

Before you go any further, this is a good point to make some initial assessments.

1: Pricing. What is your budget going forward?

I found that pricing information was difficult to find prior to punching in credit card details, so I have included at the end of this post the pricing (AUD) for various hardware configurations and data transfer costs as of 28 May 2018.

2: What type of evidence are you analysing and how?

This could range from large batches of server logs to another AWS instance, volume or image. Perhaps you have a forensic image that you have already acquired and uploaded for analysis. Or you may be trying to ascertain successful logons to the AWS platform itself.

It is extremely important at this stage to determine your host OS and the physical characteristics of your instance, such as memory, vCPUs and storage.

Gaining an understanding of your evidence sources, analysis tools, the load on system resources and the potential size of any exports will help you figure out your AWS configuration.

3: Where is your target evidence located?

Amazon splits its web services into international regions, and if you wish to analyse a snapshot of another AWS instance, you will need to ensure your analysis instance is hosted within that same region.

I’m not sure of the cost or time implications of transfers between regions, but I have been advised that this does involve a physical move of data to different data centres.
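If you do need to move a snapshot, the copy is initiated from the destination region (the regions and snapshot ID here are hypothetical):

aws ec2 copy-snapshot --region ap-southeast-2 --source-region us-west-2 --source-snapshot-id snap-0123456789abcdef0 --description "Evidence copy for analysis"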

4: How do you secure the environment and, in turn, your evidence?

Amazon has a number of measures in place to ensure the environment is secure. These include system logging for your instances, advanced monitoring (a paid-for service) and EC2 key pairs, where the .pem private key file is used for SSH connectivity.

There is also multi-factor authentication through Google Authenticator or similar mobile applications, and the ability to lock down access to your instance by specific IP addresses and ports. These are just a few suggestions; there are many other factors you may need to consider along the way.
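Key pairs can also be created up front from the AWS CLI (the key name is hypothetical):

# create a key pair, save the private key locally and lock down its permissions
aws ec2 create-key-pair --key-name forensics-key --query 'KeyMaterial' --output text > forensics-key.pem
chmod 400 forensics-key.pem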

If, at this juncture, you are wondering why on earth you would use a system in AWS to perform forensics, then I’ll direct you to the general-purpose pricing models below, where you can check some of the tech specs of the servers!

Next up… I’ll cover how to launch an AWS instance.

AWS Pricing

General Purpose – Current Generation

| Instance Type | vCPU | ECU | Memory (GiB) | Instance Storage (GB) | Linux/UNIX Usage |
|---------------|------|-----|--------------|------------------------|------------------|
| t2.nano | 1 | Variable | 0.5 | EBS Only | $0.0073 per Hour |
| t2.micro | 1 | Variable | 1 | EBS Only | $0.0146 per Hour |
| t2.small | 1 | Variable | 2 | EBS Only | $0.0292 per Hour |
| t2.medium | 2 | Variable | 4 | EBS Only | $0.0584 per Hour |
| t2.large | 2 | Variable | 8 | EBS Only | $0.1168 per Hour |
| t2.xlarge | 4 | Variable | 16 | EBS Only | $0.2336 per Hour |
| t2.2xlarge | 8 | Variable | 32 | EBS Only | $0.4672 per Hour |
| m5.large | 2 | 10 | 8 | EBS Only | $0.12 per Hour |
| m5.xlarge | 4 | 15 | 16 | EBS Only | $0.24 per Hour |
| m5.2xlarge | 8 | 31 | 32 | EBS Only | $0.48 per Hour |
| m5.4xlarge | 16 | 61 | 64 | EBS Only | $0.96 per Hour |
| m5.12xlarge | 48 | 173 | 192 | EBS Only | $2.88 per Hour |
| m5.24xlarge | 96 | 345 | 384 | EBS Only | $5.76 per Hour |
| m4.large | 2 | 6.5 | 8 | EBS Only | $0.125 per Hour |
| m4.xlarge | 4 | 13 | 16 | EBS Only | $0.25 per Hour |
| m4.2xlarge | 8 | 26 | 32 | EBS Only | $0.5 per Hour |
| m4.4xlarge | 16 | 53.5 | 64 | EBS Only | $1 per Hour |
| m4.10xlarge | 40 | 124.5 | 160 | EBS Only | $2.5 per Hour |
| m4.16xlarge | 64 | 188 | 256 | EBS Only | $4 per Hour |

| Data Transfer IN To Amazon EC2 From | Pricing |
|--------------------------------------|---------|
| Internet | $0.000 per GB |
| Another AWS Region (from any AWS Service) | $0.000 per GB |
| Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon SES, Amazon SQS, or Amazon SimpleDB in the same AWS Region | $0.000 per GB |
| Amazon EC2, Amazon RDS, Amazon Redshift and Amazon ElastiCache instances or Elastic Network Interfaces in the same Availability Zone, using a private IPv4 address | $0.000 per GB |
| …in the same Availability Zone, using a public or Elastic IPv4 address | $0.010 per GB |
| …in the same Availability Zone, using an IPv6 address within the same VPC | $0.000 per GB |
| …in the same Availability Zone, using an IPv6 address from a different VPC | $0.010 per GB |
| Amazon EC2, Amazon RDS, Amazon Redshift and Amazon ElastiCache instances or Elastic Network Interfaces in another Availability Zone or peered VPC in the same AWS Region | $0.010 per GB |

| Data Transfer OUT From Amazon EC2 To Internet | Pricing |
|------------------------------------------------|---------|
| First 1 GB / month | $0.000 per GB |
| Up to 10 TB / month | $0.140 per GB |
| Next 40 TB / month | $0.135 per GB |
| Next 100 TB / month | $0.130 per GB |
| Next 350 TB / month | $0.120 per GB |
| Next 524 TB / month | Contact Us |
| Next 4 PB / month | Contact Us |
| Greater than 5 PB / month | Contact Us |

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html

https://aws.amazon.com/iam/details/mfa/

FTK Imager and Custom Content Images

FTK Imager

FTK Imager is renowned the world over as the go-to forensic imaging tool. While working in law enforcement, I was always obsessed with ensuring I had captured the ‘golden forensic image’, which, for obvious reasons, is still ideal and gives you all that unallocated spacey goodness.

But…

Modern-day forensics and IR require answers. Quick!

As we all know, things have moved on quite rapidly from grabbing an image of a dead box and leaving it processing in your tool of choice over the weekend. This is mainly due to the issues that most units have: backlogs, lack of time and the urgency to produce results. Whether it’s management in law enforcement looking for the silver-bullet ‘Find Evidence’ button in Axiom (no digs at Magnet, but please put that back in :)) or a large corporation’s incident responder needing to analyse hundreds of endpoints for one specific artefact.

Now, I’m not saying FTK Imager is about to answer either of those questions for you but there are some handy functions which I had never used until recently.

Custom content images in FTK Imager allow the analyst to add an evidence item and build a logical image (AD1… sorry XWF users) containing only files of their choosing.

This can be handy for a few reasons.

Perhaps time to capture evidence is limited.

  • This could involve accessing a user’s laptop remotely while it is only attached to the network for a short time. This may not be lawfully permitted in your country.
  • You could have been given a computer with no PSU and need to acquire evidence from it before the battery dies (as I once had to do in the back of a $380 taxi journey).
  • In the law enforcement world, there are any number of other reasons why you may be tight on time.

You have strict instructions on what to acquire.

  • You might only have legal permission, or have been asked, to extract specific file types.
  • You are capturing evidence from a shared computer and are only allowed to extract files specific to one user account due to legal privilege.

Here are some simple ways around some of these problems using FTK Imager, presuming you are working with Windows computers or existing images.

Custom Content Image by File Type

FTK Imager allows the use of wildcards to filter and find specific files stored on the file system. This is a great feature if you are looking for a file by name*, by extension, or for a batch of files with similar names.

* Noting that matching files by name may not catch all relevant files in the way that hash matching will.

Wild Card Syntax:

? = Replaces any single character in the file name and extension
* = Replaces any series of characters in a file name and extension
| = Separates directories and files

Wild Card Filters:

Users|*|NTUSER.dat
Users|*|Documents|*.doc
Users|*|Downloads|Evidence ?.pdf
Windows|Prefetch|*

1: Start by browsing to your custom content item.

2: Then right-click and select “Add to Custom Content Image (AD1)”.

[Screenshot: the ‘Add to Custom Content Image’ menu]

3: You can manually add custom content by selecting “New” and using the wildcard options, or select “Edit” to modify existing custom content.

[Screenshots: evidence tree and wildcard dialog]

*As far as I’m aware, there is no option to save your custom content selections as a template. (Please let me know if you find one, as I currently just use a text file as a template for files of interest for varying investigation types.)

Creating Content Image by User SID

As previously mentioned, your scope may be limited due to shared computer use, and while this may not be of too much importance for law enforcement, files belonging to another user may be marked as privileged by civil court orders.

We can use FTK Imager to create an image of only the files owned by a specific user’s SID. The process is just as defined previously, but when creating the custom content image you need to tick the “Filter by File Owner” box.

[Screenshot: the ‘Filter by File Owner’ option]

Once you have selected this, you will be presented with the respective file owners and their SIDs on the system.

[Screenshot: file owners and their SIDs]

Collection for Incident Response

The last instance where these methods may be useful is if you have a handful of workstations from which you need to collect some very specific files or artefacts. I tend to use a text file as a template; although this works, it is a bit clunky and slow.

There are a great many other commercial and open-source tools which can already perform these tasks extremely well, such as F-Response, X-Ways Forensics, GRR and so on. This option also doesn’t really work for incident response at scale, but if you’re stuck without your commercial tools or have a very targeted approach for collections from a few computers, then this could work for you.

FTK Imager also comes in a Lite flavour which doesn’t require any installation. 🙂

Reference:

https://support.microsoft.com/en-au/help/243330/well-known-security-identifiers-in-windows-operating-systems

https://accessdata.com/product-download

https://www.f-response.com/

https://github.com/google/grr

A Few Interesting iOS Forensic Artefacts

As of 2017, there were over 2.8 million apps available on the Apple App Store and close to that number on the Google Play store. This is a staggering number of applications, most of which are unlikely ever to be parsed and decoded by existing forensic tools.

How do we fill that gap?

As I’m sure we are all aware, common mobile forensic tools tend to parse applications pretty well, but as with any forensic tool, their output should always be validated; tools are known, from time to time, to get things wrong or miss things.

At the start of our case, in order not to get bogged down, we should carefully consider the questions we aim to answer, and whether dumping a handset and relying on our tool’s decoded data is actually going to answer them.

I thought I would try to identify some artefacts in common apps which might help answer attribution questions an analyst may be faced with.

The age-old question of ‘putting bums on seats’ or, in our instance, ‘hands on the mobile device’ is often the aim for prosecution cases. Digging through some common application plist and database files should hopefully help show some of this type of activity.

Let’s take a look…

#Instagood

com.burbn.instagram.plist

[Screenshot: com.burbn.instagram.plist]

As you can see, we have a timestamp which shows when Instagram’s swipe-down page-reload function was last used.

Great for proving your kids were on Instagram when they said they were asleep.

#Earworms

ShazamDataModel.sqlite

Shazam provides users with the ability to identify music while on the go, great for those wriggly little earworms. What you may not know is that Shazam records your location and the time every time you Shazam a track. The format of this database table doesn’t appear to have changed over the last two years.

Highly useful for showing user interaction and bad taste in music…

[Screenshot: ShazamDataModel.sqlite]

#SIMS

CellularUsage.db

Apple does quite a good job of preserving the list of ICCIDs of SIM cards which have been present in an iPhone, and sometimes you may also see the phone number (MSISDN) which has been saved on that SIM card. This varies between providers and may not always be present.

The field storing the phone number (MSISDN) on a SIM is editable, so it is not 100% reliable, although I have never seen it modified in recent years.

[Screenshot: CellularUsage.db]

The last known ICCID and telephone number can also be seen in the com.apple.commcenter.plist file.

#locationlocationlocation

CallHistory.Storedata

Perhaps not all that useful for cases where your device has always been in the same country, but if you’re investigating someone who tends to travel, it would be nice to know where they were when calls were made. Again, a simple SQL query will pull out this information.

[Screenshot: CallHistory.Storedata query output]
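A minimal version of that query using the sqlite3 shell is below; the table and column names are from my own notes on iOS databases of this era, so verify them against your own extraction. ZDATE is stored as Mac absolute time (seconds since 1 January 2001), hence the 978307200-second offset when converting to a readable timestamp:

sqlite3 CallHistory.Storedata "SELECT datetime(ZDATE + 978307200, 'unixepoch') AS call_time, ZADDRESS, ZDURATION, ZISO_COUNTRY_CODE FROM ZCALLRECORD ORDER BY ZDATE;"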

#layouts

iconstate.plist

If you are interested in how the UI looked, iconstate.plist records the custom strings (including emoji) used as headers for the groups of app icons on iOS. Entries can be read left to right, top to bottom, to visualise the layout of the apps, and second panes of apps are nested in a further array, as can be seen in the example below.

[Screenshots: iconstate.plist]
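On a Mac, a quick way to eyeball this file (whether binary or XML) is to pretty-print it with plutil:

plutil -p iconstate.plist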

#podcasts

MTLibrary.sqlite

According to Fast Company, “In March 2018, Apple Podcasts passed 50 billion all-time episode downloads and streams.” Thankfully for us, podcast episodes are all tracked in a database stored on the iPhone, and there is useful information in there about when each episode was downloaded to the device and when it was last played.

Showing when a podcast was last played could go some way to corroborating whether or not someone was listening to their headphones when they should have been listening to their college lecturer.

[Screenshot: MTLibrary.sqlite]

If you don’t listen to the Wired UK Podcasts, then it’s definitely worth a pop.

Hopefully, this quick write-up on some iOS artefacts will help someone out. I believe these may be of some assistance along the way to attributing usage and location of devices at very specific times. I’m sure the commercial tools out there already extract some of this information, but an analyst should be able to parse it themselves. Identifying app data that a tool may have missed, and ensuring your evidence is verified as accurate, should be paramount.