Year End Review: Automation with a Bug Bounty Pipeline

Bug Bounty and Vulnerability Disclosure Programs are growing at an alarming rate. At the end of 2020, I was monitoring over 800 companies across 3+ million domains on approximately half a million IPs. All of this data continues to be frequently updated as companies change their scope and assets. A pipeline provides passive income while allowing me to spend time working on other interesting projects and bugs.

Bug Bounty Programs (BBPs) are programs where companies agree to pay researchers/testers for disclosed vulnerabilities. Vulnerability Disclosure Programs (VDPs), on the other hand, publicly state that they will accept bugs through a communication channel but do not provide compensation. VDPs will sometimes give out swag or place researchers on a hall-of-fame list. In the bug bounty community, there are strong feelings about which types of programs researchers should spend their time on. In general, a VDP will have a less-hardened attack surface than a BBP, since compensation attracts more researchers. VDPs will still generally be more secure than companies that don’t accept vulnerabilities from security researchers at all.

The first step in aggregating bug bounty data is determining what programs to hack on. From there, the program scopes need to be retrieved frequently and reliably. Researchers need to determine whether they will test on BBPs or VDPs and whether there are certain industries they want to opt out of, such as blockchain smart contracts.

Where Do I Find Companies Accepting Vulnerabilities?

Various companies that are looking for vulnerabilities can be found on platforms like HackerOne, Bugcrowd, Intigriti, YesWeHack, and through sources such as disclose.io. Invite-only platforms exist as well, but have various requirements that may or may not play well with an automation pipeline.

An example of Spotify’s Bug Bounty scope can be seen in items such as *.spotify.com and *.spotifyforbrands.com.

Scraping Scopes

Bug bounty platforms provide a central repository for researchers to identify which companies are accepting vulnerabilities. They require companies to fill out their profile page with rules and scope in a semi-consistent fashion. Because these profiles live on a common platform, they can be scraped. Some platforms allow unauthenticated APIs to be used, but there isn’t a great way to pull private program information without better APIs from the platforms.

One attempt is to use a tool such as https://github.com/sw33tLie/bbscope, which requires the cookies for each of your HackerOne, Bugcrowd, and Intigriti sessions and will then try to parse out the scope on each program.

I wrote my own solution a few years ago that grabs all programs on each platform and tries to parse out the scope from what each company wrote. My solution is very ugly and requires consistent refinement, but it works.

A lazy solution could be to just download a list of subdomains from public sources such as ProjectDiscovery’s Chaos project.

It’s important to pull this data on a recurring basis. This will allow you to obtain new companies that can be tested on. It will ensure that you have coverage for new domains that companies add to their platform profile and can be used to remove items from scope when they are no longer applicable to a company. As a side note, it’s a good idea to grab each company’s status. If they are not currently accepting vulnerabilities, then there is no reason to spend the compute time or energy gathering data.

Once you get the scope and any other bits of metadata you wish to store, you can start to filter and perform recon on a company.
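
To make that concrete, here is a small, hedged sketch of the filtering step in Go. The ScopeEntry fields are hypothetical stand-ins for whatever your scraper parses off a program page; the idea is just to reduce in-scope wildcard entries to root domains before recon kicks off.

package main

import (
	"fmt"
	"strings"
)

// ScopeEntry is a hypothetical shape for what a scraper parses off a program page.
type ScopeEntry struct {
	Target   string // e.g. "*.spotify.com"
	InScope  bool
	Category string // "web", "mobile", "other", ...
}

// RootDomains reduces in-scope wildcard web entries to the bare domains that
// subdomain enumeration should be seeded with.
func RootDomains(entries []ScopeEntry) []string {
	var roots []string
	for _, e := range entries {
		if !e.InScope || e.Category != "web" {
			continue
		}
		d := strings.TrimPrefix(strings.ToLower(e.Target), "*.")
		// Skip anything that still contains a wildcard or isn't a clean domain.
		if strings.Contains(d, "*") || !strings.Contains(d, ".") {
			continue
		}
		roots = append(roots, d)
	}
	return roots
}

func main() {
	scope := []ScopeEntry{
		{Target: "*.spotify.com", InScope: true, Category: "web"},
		{Target: "*.spotifyforbrands.com", InScope: true, Category: "web"},
	}
	fmt.Println(RootDomains(scope)) // [spotify.com spotifyforbrands.com]
}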

Recon

Automated recon has boomed over the last few years. There are new scripts and tools being added every month that are worth testing out to see if they fit into your bug bounty pipeline. It’s overwhelming to look at complex flow charts that have been built out by some researchers and determine where to get started. Test out some tools and find what works for you. Those tools and components can always be changed as your methodology matures.

I start my reconnaissance by performing subdomain enumeration. This means I take companies with wildcard scopes and try to find all related subdomains.

I have found good results from using tools such as Amass, Subfinder, Sublist3r, and ProjectDiscovery’s Chaos. Many of these tools aggregate public and commercial APIs that pull out subdomains for a given domain.
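
For a feel of what those sources look like under the hood, here is a hedged sketch that queries crt.sh, one of the public certificate-transparency sources many of these tools wrap. The query string and the name_value field match crt.sh’s JSON output at the time of writing, but that interface isn’t guaranteed to stay stable.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strings"
)

// crtshEntry models the only field we care about from crt.sh's JSON output.
type crtshEntry struct {
	NameValue string `json:"name_value"`
}

// certSubdomains pulls certificate-transparency names for a root domain from crt.sh.
func certSubdomains(domain string) ([]string, error) {
	url := fmt.Sprintf("https://crt.sh/?q=%%25.%s&output=json", domain)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var entries []crtshEntry
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		return nil, err
	}

	seen := map[string]bool{}
	var subdomains []string
	for _, e := range entries {
		// name_value can hold several names separated by newlines.
		for _, name := range strings.Split(e.NameValue, "\n") {
			name = strings.ToLower(strings.TrimPrefix(name, "*."))
			if !seen[name] {
				seen[name] = true
				subdomains = append(subdomains, name)
			}
		}
	}
	return subdomains, nil
}

func main() {
	subs, err := certSubdomains("example.com")
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Println(len(subs), "names found")
}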

Some researchers will perform DNS bruteforcing to identify additional subdomains using a list like Jason Haddix’s list in SecLists. I personally don’t perform DNS bruteforcing, but it’s a good candidate for improving a pipeline.

After identifying a large list of subdomains to test, that data should be filtered to only what is relevant. Any filtering that can be done upfront will save hours of time in the future. Running large lists of domains through scanners and tooling will greatly slow down and break pipelines. A good start is to check what is online. This status may be defined by DNS resolution or by the availability of some network service such as HTTP.

It’s worth determining whether the metadata (IP and relevant ports) is valuable to keep in your inventory or whether your pipeline should consistently retrieve fresh data. My preference is to store that data and periodically check stale records to see if they are still accurate. It’s excellent to be able to automatically query network data and associations when writing test cases.

If maintaining an inventory of IPs and ports is of interest, then DNS resolutions and network scans are a large portion of ongoing recon. Massdns is a frequent suggestion for checking whether many domains are online; it requires a list of DNS resolvers that must be updated regularly. Nmap is the most famous network scanner; however, masscan, naabu, and rustscan offer faster results with reduced coverage/detection. It also depends on what is of interest. There are 65,535 TCP ports that can be scanned, which can take a significant amount of time. It may be valuable to scan some UDP ports as well. Network scanning can provide valuable information such as what type of software is running on a given port and can even be configured to run vulnerability scans against that service.
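
As a simple illustration of the resolve-then-scan step, the sketch below uses only Go’s standard library. The port list is an arbitrary starter set and example.com is a placeholder target.

package main

import (
	"fmt"
	"net"
	"time"
)

// commonPorts is an arbitrary starter set; a real pipeline would tune this.
var commonPorts = []int{22, 80, 443, 3000, 8000, 8080, 8443}

func main() {
	host := "example.com"

	// DNS resolution first: if nothing resolves, skip the port checks entirely.
	ips, err := net.LookupHost(host)
	if err != nil || len(ips) == 0 {
		fmt.Println("offline (no DNS):", host)
		return
	}

	for _, port := range commonPorts {
		addr := fmt.Sprintf("%s:%d", ips[0], port)
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			continue // closed or filtered
		}
		conn.Close()
		fmt.Printf("%s (%s) has TCP %d open\n", host, ips[0], port)
	}
}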

The gathered IPs can be analyzed with services such as Shodan, which perform passive network scanning on your behalf. Additional metadata can be grabbed from these services, such as the ISP and whether the host is in the cloud. The downside is that the rate limits for many of these services are low, and checking hundreds of thousands of IPs at a time can be a bottleneck.

You may decide to filter out subdomains and domains that are offline. This will certainly save space and time as you recheck this data; however, it can be useful to keep around. Unresolved subdomains can be used for virtual host fuzzing and for easy proofs of concept for Server-Side Request Forgery (SSRF) vulnerabilities that allow you to request a company’s internal content.

At minimum, common web services should be identified and tested in a bug bounty pipeline. Port 80 is commonly used for insecure traffic (http), while port 443 is used for TLS traffic (https). An extended set of ports such as 3000, 3001, 3002, 8000, 8080, and 8443 is commonly seen as well. I highly recommend using httprobe to identify which domains are online and whether they are accessible through https, http, or both.
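
If you want to keep this probing step inside your own code instead of shelling out, a stripped-down version of the idea looks roughly like the following. It’s a sketch of what httprobe does, not a replacement for it.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe checks whether a domain answers over https and/or http, similar in
// spirit to httprobe.
func probe(domain string) []string {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Targets frequently present broken or internal certificates;
		// skipping verification keeps them from being filtered out.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	var alive []string
	for _, scheme := range []string{"https", "http"} {
		url := fmt.Sprintf("%s://%s", scheme, domain)
		resp, err := client.Get(url)
		if err != nil {
			continue
		}
		resp.Body.Close()
		alive = append(alive, url)
	}
	return alive
}

func main() {
	fmt.Println(probe("example.com"))
}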

Some other items that may be interesting or relevant to grab:

  • Screenshots
  • Wappalyzer Tags
  • DNS CNAMES

Storing and Managing the Data

The scope from the platforms and the reconnaissance data can become quite large after some time. In an automated system, the data needs to be stored and processed automatically, and it needs to be frequently queried and updated. The ideal scenario is to manage all of this, or at least certain components, through an API.

There are a few main tables that need to store the appropriate data. I have a Company, Site, IP, and Vulnerability table in my database. I created a join table to map IPs to sites and vice versa, which allows me to be very efficient in translating this data. Some companies list their public IP range as part of their scope, so another possibility would be to link IPs to companies. At its core, it’s a simple database holding a lot of data.
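
For illustration, here is a minimal sketch of those tables as Go structs. The exact columns are my own picks rather than a prescription; the point is the Company to Site to IP relationship with the join table in the middle.

package models

import "time"

// Company is a bug bounty or vulnerability disclosure program being tracked.
type Company struct {
	ID       uint
	Name     string
	Platform string // hackerone, bugcrowd, intigriti, ...
	Active   bool   // is the program currently accepting reports?
}

// Site is a single hostname tied to a company.
type Site struct {
	ID        uint
	CompanyID uint
	Hostname  string
	Alive     bool
	LastSeen  time.Time
}

// IP is a single address along with the open ports observed on it.
type IP struct {
	ID      uint
	Address string
	Ports   []int
}

// SiteIP is the join table mapping sites to the IPs they resolve to.
type SiteIP struct {
	SiteID uint
	IPID   uint
}

// Vulnerability is a finding reported back by a scanner.
type Vulnerability struct {
	ID       uint
	SiteID   uint
	Name     string
	Severity string
	Details  string
	FoundAt  time.Time
}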

Any framework or language could be used to create this central database. The bulk of the effort is in the endpoints that process the data and requests. Some questions to ask are:

  • How do I want to interact with a company’s data?
  • Do I need aggregate or individual results?
  • How do I handle large HTTP responses?
  • How much metadata do I intend to store on each table?

A secondary consideration is how to trigger events and queue jobs. Cron works great for scheduling time-based tasks. An example would be to fetch the scopes of all bug bounty companies at 5 PM daily and send any new data to your reconnaissance suite. Certain jobs, such as importing hundreds of thousands of records from sites like Yahoo, may take up all of your API’s CPU. You may want to consider storing those and processing them in batches.
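
If the scheduling should live inside the API process instead of the system crontab, a small sketch with the third-party robfig/cron library could look like the following; fetchScopes is a placeholder for whatever kicks off your scope scraper.

package main

import (
	"log"

	"github.com/robfig/cron/v3"
)

func fetchScopes() {
	// Placeholder: pull program scopes from each platform and queue any new
	// domains for the reconnaissance suite.
	log.Println("fetching bug bounty scopes...")
}

func main() {
	c := cron.New()
	// "0 17 * * *" == every day at 5 PM.
	if _, err := c.AddFunc("0 17 * * *", fetchScopes); err != nil {
		log.Fatal(err)
	}
	c.Start()
	select {} // block forever while the scheduler runs
}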

Some bug bounty hunters will store this data in folders on a filesystem and stitch everything together with bash scripts. I prefer using an API as it provides more granularity on how I want to shape the data, it allows me to stay organized and consistent across companies, and it can easily be deployed to different systems.

The Fun Stuff – Finding Vulnerabilities

At this point in the journey there are some systems set up to continuously grab data and start working on it. That data is stored and can now be queried across however many attributes you have saved. This opens up a lot of exciting possibilities.

As part of an MVS (minimum viable scanner), the bug bounty pipeline needs to be able to pull a subset of the data it has collected, scan or fuzz it for vulnerabilities, and then report back positive results. It would be possible to auto-report these issues to companies; however, I discourage doing this as scanners can have false positives. Results should always be manually reviewed/exploited.

A strong baseline would be to implement functionality to run ProjectDiscovery’s Nuclei scanner on all of your domains on a rolling basis. This means that once it runs through your list, it starts over again. The scanner and templates are continuously updated by the community, which takes the work out of writing test cases for CVEs and common misconfigurations.
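
A bare-bones way to wire that in is to shell out to nuclei in a loop. The sketch below assumes nuclei is on the PATH and uses placeholder file names; the -l (input list) and -o (output) flags match current releases, but verify them against your version.

package main

import (
	"log"
	"os/exec"
)

// runNuclei shells out to ProjectDiscovery's nuclei against a file of live hosts.
func runNuclei(targets, output string) error {
	return exec.Command("nuclei", "-l", targets, "-o", output).Run()
}

func main() {
	// Rolling scan: when a pass over the target list finishes, start over.
	for {
		if err := runNuclei("alive-hosts.txt", "nuclei-results.txt"); err != nil {
			log.Println("nuclei run failed:", err)
		}
		// Re-export a fresh target list from the database here before looping.
	}
}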

If you have read my Metasploit’s RPC API article, then another option could be to automate the community version of Metasploit against your targets. Metasploit provides the check command on a large number of modules that have a default port associated with them. Metasploit is regularly updated by Rapid7 and is another great way to automate without recreating vulnerability signatures manually. A successful exploit will likely give you a shell, which typically translates to a critical-severity vulnerability.

Some other options would be to write your own modules on a regular basis or run other people’s scripts that can be incorporated into the pipeline. They can be efficiently tested by querying applicable network services or web application technologies instead of scanning all assets for a specific vulnerability.

Once a scanner has identified an issue, it needs to report back to the central database. It’s great to aggregate the data in one place, but with fast-paced 0-days you need to know within seconds of identifying the vulnerability if you want to be first to report a bug. A notification system is a good idea to have in your pipeline, and it can be configured to get your attention based on a variety of factors such as severity and confidence. Slack and Telegram provide free methods of sending notifications. AWS and Twilio can be used to send SMS messages. There are a lot of free and paid products that can be used to send notifications for a variety of events in your pipeline.
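
As one concrete example, a Slack incoming webhook only needs a small JSON POST. This is a sketch; the webhook URL and message below are placeholders for your own workspace and alert format.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// notifySlack posts a message to a Slack incoming webhook.
func notifySlack(webhookURL, message string) error {
	payload, err := json.Marshal(map[string]string{"text": message})
	if err != nil {
		return err
	}
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("slack returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Placeholder webhook URL and finding.
	err := notifySlack("https://hooks.slack.com/services/XXX/YYY/ZZZ",
		"[HIGH] Nuclei hit on admin.example.com")
	if err != nil {
		log.Println(err)
	}
}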

Building the Infrastructure

A large part of bug bounty hunting is bootstrapping a bunch of technologies together to achieve automation. Scripts have to be modular enough that you can swap out tools and components. Some pieces in the pipeline are essential and are unlikely to be disrupted; however, the code that glues it all together should allow for an easy upgrade.

Most of what I discussed in this article can be run for $5-10 in the cloud each month, which is $60-120 a year. That is cheaper than most security tools, and the pipeline can fund itself through earned bounties.

I’m a huge fan of Axiom, which allows you to create a bug hunting image on DigitalOcean, AWS, etc., and spin up a new instance from the command line in a matter of seconds. The base instances that cloud providers release are generally sufficient for any type of scanning and are fairly cheap. Axiom wraps the infrastructure code into a bundle of command line tools that allow for IP rotation, distributed scanning, and most importantly pay-for-what-you-use tooling. Customizing the base Axiom images is fairly easy and provides a great starting point.

Automated tooling like this allows a researcher to spin up one or several instances for a few hours to run through a test suite and then delete all of the instances to prevent additional costs. It ensures that the servers are using the latest image and that there isn’t any remnant data that might cause problems.

I like to use a queue to track the state of my scanners. As I said at the beginning of this article, I have a few million domains that I’m tracking. On a single instance, I can likely get through a few thousand scans in a couple of hours. Using software such as Redis, I can load all of my data into a job-specific queue and parse it with any programming language. I can pop the appropriate jobs from the queue for a given time frame and then execute my tests. When the queue is empty, I can move on to the next test or decide to replenish the queue with fresh data from my database.
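
Here is a minimal sketch of that pattern using the go-redis client. The queue name and domains are placeholders, and in practice the push and pop sides would live in different processes.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Producer side: load a batch of domains from the database into a
	// job-specific queue.
	targets := []interface{}{"a.example.com", "b.example.com", "c.example.com"}
	if err := rdb.LPush(ctx, "queue:http-scan", targets...).Err(); err != nil {
		log.Fatal(err)
	}

	// Worker side: pop jobs until the queue is empty, then move on or refill.
	for {
		domain, err := rdb.RPop(ctx, "queue:http-scan").Result()
		if err == redis.Nil {
			break // queue drained
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Println("scanning", domain) // run the actual test here
	}
}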

When deciding on an infrastructure, spend the time to play around with the technology until you feel comfortable bootstrapping with it. Ensure that there is enough community support to incorporate software into your stack because you will run into problems.

A Million Forks in the Road

Bug Bounty pipelines are necessary for bug hunters who are looking to test against a breadth of companies. They can range from simple bash scripts to entire networks of bots and micro-services. Pipelines allow for regression testing and provide excellent methods of staying organized. They can easily surpass what any person can manually accomplish, yet they will struggle with certain bug classes that can’t be easily automated. In its first year, my pipeline has managed to pay for itself for the next 20+ years.

There are a handful of improvements that can be made to cover various technical domains and techniques in my pipeline. Each person gets to choose how they want to build their pipeline and what they want it to focus on. It’s easy to extend tables and increase data sources. Automatically ingesting new CVEs and vulnerabilities from the community is powerful and requires minimal effort. When you have started building or planning your pipeline, I encourage you to ensure that the code you write is modular, reinvent the wheel as little as possible, and iterate consistently.

Hit me up on Twitter @wdahlenb with stories about your Bug Bounty pipeline.

Automated Command Execution via Metasploit’s RPC API

Recently I purchased the Black Hat Go book from No Starch Press. The book has a pretty good overview of using Go for offensive-security-minded people. In Chapter 3, the book has a section on creating a client for Metasploit’s RPC API. The final code is publicly available on the book’s GitHub repo. Download it to follow along.

Rapid7 provides the documentation for Metasploit’s RPC API here: https://metasploit.help.rapid7.com/docs/rpc-api. All of the API calls that are implemented can be found here: https://metasploit.help.rapid7.com/docs/standard-api-methods-reference

First boot up Metasploit and start the RPC server:

msf5 > load msgrpc Pass=password ServerHost=192.168.0.15
[*] MSGRPC Service:  192.168.0.15:55552
[*] MSGRPC Username: msf
[*] MSGRPC Password: password
[*] Successfully loaded plugin: msgrpc

The code from Black Hat Go relies on two environment variables being set: MSFHOST and MSFPASS.

$ export MSFHOST=192.168.0.15:55552
$ export MSFPASS=password

The existing code will print out each session ID and some basic information about each session that currently exists in the running Metasploit instance. This isn’t particularly helpful, especially given the availability of the other API calls.

$ go run main.go
Sessions:
    1  SSH test:pass (127.0.0.1:22)

The first useful case would be loading a list of commands to be run on all sessions and returning the output. For this exercise I’ll make use of the session.shell_read and session.shell_write methods to run commands on the SSH session that I have.

The session.shell_write method has the following structure:

Client:

[ "session.shell_write", "<token>", "SessionID", "id\n" ]

Server:

{ "write_count" => "3" }

In the rpc/msf.go file, two structs can be added to handle this data:

type sessionWriteReq struct {
	_msgpack  struct{} `msgpack:",asArray"`
	Method    string
	Token     string
	SessionID uint32
	Command   string
}

type sessionWriteRes struct {
	WriteCount string `msgpack:"write_count"`
}

It’s worth noting that the command needs to have a newline delimiter included in the message. I tested out a few inputs and found that consecutive commands didn’t work. Ex: “id;whoami;hostname”. Only the first command would be run.

The following method can be added to rpc/msf.go to write a command to a particular session:

func (msf *Metasploit) SessionWrite(session uint32, command string) error {
	ctx := &sessionWriteReq{
		Method:    "session.shell_write",
		Token:     msf.token,
		SessionID: session,
		Command:   command,
	}

	var res sessionWriteRes
	if err := msf.send(ctx, &res); err != nil {
		return err
	}

	return nil
}

The function doesn’t return anything other than errors as the write_count isn’t helpful to us. A method call can be added to the client/main.go file to execute commands.

msf.SessionWrite(session.ID, "id\n")

This executes commands, but prevents us from seeing the results. The next step is implementing the session.shell_read method so that we can return the results.

The session.shell_read method has the following structure:

Client:

[ "session.shell_read", "<token>", "SessionID", "ReadPointer ]

Server:

{
"seq" => "32",
"data" => "uid=0(root) gid=0(root)…"
}

Similarly to the write operation, two structs for reading the results can be used:

type sessionReadReq struct {
	_msgpack    struct{} `msgpack:",asArray"`
	Method      string
	Token       string
	SessionID   uint32
	ReadPointer string
}

type sessionReadRes struct {
	Seq  uint32 `msgpack:"seq"`
	Data string `msgpack:"data"`
}

The ReadPointer is interesting as it allows us to maintain state. Rapid7 encourages this behavior as it allows for collaboration. We will need to determine how to obtain the current ReadPointer before writing data to ensure only our client’s output is returned. For now, let’s stick with a value of 0 to ensure we capture all output. Add the following method:

func (msf *Metasploit) SessionRead(session uint32, readPointer uint32) (string, error) {
	ctx := &sessionReadReq{
		Method:      "session.shell_read",
		Token:       msf.token,
		SessionID:   session,
		// The API expects the pointer as a decimal string; fmt.Sprint yields
		// e.g. "32", whereas string(readPointer) would produce a single rune.
		ReadPointer: fmt.Sprint(readPointer),
	}

	var res sessionReadRes
	if err := msf.send(ctx, &res); err != nil {
		return "", err
	}

	return res.Data, nil
}

A small addition to client/main.go can be made to read all data from the session:

data, err := msf.SessionRead(session.ID, 0)
if err != nil {
	log.Panicln(err)
}
fmt.Printf("%s\n", data)

Running the new code gives the following:

$ go run main.go
Sessions:
    1  SSH test:pass (127.0.0.1:22)
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
[snip]
test:x:1001:1001:Mista Test,,,:/home/test:/bin/bash
uid=1001(test) gid=1001(test) groups=1001(test)

Woah. I didn’t list out /etc/passwd! Looks like the results are spitting out more than the “id” command that was specified. It’s time to figure out how to get the latest ReadPointer instead of 0.

Digging through the other methods:

The session.ring_last method will return the last issued ReadPointer (sequence number) for the specified Shell session.

https://metasploit.help.rapid7.com/docs/standard-api-methods-reference#section-session-ring-last

Perfect! Let’s add two additional structs to manage the request and response:

type sessionRingLastReq struct {
	_msgpack    struct{} `msgpack:",asArray"`
	Method      string
	Token       string
	SessionID   uint32
}

type sessionRingLastRes struct {
	Seq  uint32 `msgpack:"seq"`
}

The structs should all look very similar since the requests and responses are nearly identical.

First off let’s send a request to connect and get the last sequence number for our ReadPointer. I’ll create a SessionReadPointer method to obtain this value:

func (msf *Metasploit) SessionReadPointer(session uint32) (uint32, error) {
	ctx := &sessionRingLastReq{
		Method:    "session.ring_last",
		Token:     msf.token,
		SessionID: session,
	}

	var sesRingLast sessionRingLastRes
	if err := msf.send(ctx, &sesRingLast); err != nil {
		return 0, err
	}

	return sesRingLast.Seq, nil
}

In my client/main.go code I can add a call to this function prior to writing a command and then update the read call to use this returned value.

readPointer, err := msf.SessionReadPointer(session.ID)
if err != nil {
	log.Panicln(err)
}
msf.SessionWrite(session.ID, "id\n")
data, err := msf.SessionRead(session.ID, readPointer)
if err != nil {
	log.Panicln(err)
}
fmt.Printf("%s\n", data)

I can then go ahead and run the updated code:

$ go run main.go
Sessions:
    1  SSH test:pass (127.0.0.1:22)
uid=1001(test) gid=1001(test) groups=1001(test)

Awesome. Only the results of the command specified will be printed out. How can I expand on this to automate running scripts on each session?

For this second part, I will ingest a file containing as many commands as I wish to run, separated by newline characters.

An example would be:

whoami
date
id
hostname

I’ll transfer the code from the client/main.go into the rpc/msf.go file to make it reusable:

func (msf *Metasploit) SessionExecute(session uint32, command string) (string, error) {
	readPointer, err := msf.SessionReadPointer(session)
	if err != nil {
		return "", err
	}
	msf.SessionWrite(session, command)
	data, err := msf.SessionRead(session, readPointer)
	if err != nil {
		return "", err
	}
	return data, nil
}

The next step is reading the file into a slice. I went ahead and used the bufio package to scan the file line by line. I added the following underneath the variable declarations in my client/main.go file.

commands := []string{}
if len(os.Args) == 2 {
	file, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatalln(err)
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		commands = append(commands, scanner.Text())
	}
}

The contents of the file specified in the first argument will be read into the commands slice. Printing out the contents of commands provides:

[whoami date id hostname]

In the rpc/msf.go file I added a new function to wrap around SessionExecute. The bufio scanner removed the newline character from each line, so this helper method can add it back and reuse the SessionExecute method as many times as needed. The results are returned on an error or once all the commands are done.

func (msf *Metasploit) SessionExecuteList(session uint32, commands []string) (string, error) {
	var results string
	for _, command := range commands {
		tCommand := fmt.Sprintf("%s\n", command)
		result, err := msf.SessionExecute(session, tCommand)
		if err != nil {
			return results, err
		}
		results += result
	}

	return results, nil
}

Finally within the client/main.go file I added a check to see if the commands variable has any commands to run on each session. If it does, we can call msf.SessionExecuteList and print out the results.

if len(commands) > 0 {
	data, _ := msf.SessionExecuteList(session.ID, commands)
	fmt.Printf("%s", data)
}

Running the code gives the following:

$ go run main.go commands.txt
Sessions:
    1  SSH test:pass (127.0.0.1:22)
zepher
test
Mon Apr 27 16:20:51 CDT 2020
uid=1001(test) gid=1001(test) groups=1001(test)

The output could be cleaned up a bit, especially with multiple sessions. Perhaps the command output, along with more of the session metadata, could be put into JSON for easy parsing.
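
As a rough sketch of that idea, a small wrapper type could be marshaled with encoding/json. The type and helper below are my own additions rather than part of the book’s code, and they assume the session struct exposes ID and Info the way the session listing suggests.

// sessionResult wraps the pieces worth exporting for each session.
type sessionResult struct {
	ID     uint32 `json:"id"`
	Info   string `json:"info"`
	Output string `json:"output"`
}

// toJSON renders one session's metadata and command output as indented JSON.
func toJSON(id uint32, info, output string) (string, error) {
	b, err := json.MarshalIndent(sessionResult{ID: id, Info: info, Output: output}, "", "  ")
	if err != nil {
		return "", err
	}
	return string(b), nil
}

Inside the session loop in client/main.go (with encoding/json imported), the Printf calls could then be swapped for something like out, _ := toJSON(session.ID, session.Info, data) followed by fmt.Println(out).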

The proof of concept is powerful. It allows for command execution in a collaborative environment that scales well. Overall the API provides an opportunity to automate some of the manual tasks that are restricted to msfconsole. I recommend playing around with some of the other API calls and taking a look at Black Hat Go.

The final code can be found on my Github repo: https://github.com/wdahlenburg/msf-rpc-client

Post Exploitation through Git with SSH Keys

Git is a version control system that allows content to be shared and modified. It was popularized by GitHub; however, many other companies run their own Git server. These servers can provide a wealth of information during engagements. They can be helpful for research on a company by providing a list of developers that are contributing to a company repository. This information can be fed into popular tools such as Gitrob or truffleHog to leak potential secrets that could compromise a company. For companies that run their own GitHub Enterprise server, the rules on committing passwords may be more lax.

There hasn’t been much discussion of post-exploitation through GitHub. This is likely due to many security professionals not knowing how to use git or simply running out of time on engagements. Diving into post-exploitation with GitHub is an excellent way to steal private repositories and impact production code.

In a Linux environment, SSH keys are normally stored in the ~/.ssh/ folder. They come in public/private key pairs. A developer can use these keys to authenticate to GitHub over SSH, which allows them to read and write content. GitHub allows repositories of code to be stored with public or private permissions.

GitHub allows many keys to be stored for a user account. This is helpful because it allows users to create different keys on their different computers. For an attacker, the scope is increased every time a new key is added to GitHub. An attacker only has to compromise one of those keys to gain persistent access to GitHub.

A Metasploit auxiliary module was written to quickly enumerate local SSH keys and test their access to GitHub. Other modules can be used to scrape keys from a host. A compromised user will only have access to their own keys unless they can make a lateral escalation. A compromised root user will have access to all keys on the host. This Metasploit module speeds up the checking and can be used with GitHub and GitLab. It can easily be extended to other servers as well.

The module is currently available in Metasploit.

Once a private key is obtained, it can be used with a simple SSH config file:

$ cat ~/.ssh/config
Host github
HostName github.com
User git
IdentityFile /home/wy/.ssh/id_ed25519

Access can be tested through:

$ ssh -T git@github

If successful, the response will tell you which user the key belongs to.

The first option for post-exploitation is to read private repositories. GitHub does not provide an easy way to list these without a Personal Access Token, so guessing/bruteforcing will have to be done. Other Git servers, such as Gitolite, may allow querying for private repos (https://bytefreaks.net/gnulinux/bash/how-to-list-all-available-repositories-on-a-git-server-via-ssh). In organizations that deploy production creds by layering private repos on top of public ones, this could be an easy task.

Example:

  • My-repo-1 is the code base
  • My-repo-1-shadow contains the creds that are layered on top of My-repo-1

By appending -shadow to the known public repos, a user with a stolen SSH key could download the sensitive credentials.

If there aren’t any patterns, a suggestion would be to look for references to other projects and create a wordlist. A for loop trying to clone each repo for the user could lead to success.
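
Here is a hedged sketch of that loop in Go, reusing the ~/.ssh/config alias from above. The organization name and wordlist file are placeholders.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// wordlist.txt: one candidate repo name per line (my-repo-1-shadow, ...).
	file, err := os.Open("wordlist.txt")
	if err != nil {
		log.Fatalln(err)
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		repo := scanner.Text()
		// "github" is the Host alias from the SSH config above; "someorg"
		// is a placeholder for the target organization or user.
		url := fmt.Sprintf("git@github:someorg/%s.git", repo)
		if err := exec.Command("git", "clone", url).Run(); err != nil {
			continue // repo doesn't exist or this key can't read it
		}
		fmt.Println("cloned:", repo)
	}
}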

Reading sensitive repos is a great win, but it limits what can be done. Writing to a repo brings on endless opportunities to backdoor code. How often do developers trust what’s already been committed? Most start their day off doing a git pull on a repo to make sure they are up to date. An attacker could modify code and put a backdoor into the master branch. The next developer that runs it could grant a reverse shell to the attacker. If build processes are weak, this code could be automatically pushed into production.

SSH keys can provide a method of persistence in an environment. They are commonly used for access to servers, but the extended trust to servers like GitHub allows an attacker to maintain access in an organization. Revoking the keys on the compromised server will not stop an attacker from using them to access GitHub. An attacker that has access to the Git server can add their backdoor and wait until they are let back in.

How can this be avoided?

SSH keys can be configured to use passphrases. Most people don’t use them. It is highly recommended to enforce passphrases on SSH keys.

Passphrases can even be added to existing private keys:

$ ssh-keygen -p -f ~/.ssh/id_rsa

An attacker will have to crack the passphrase on the key to be able to use it. This isn’t a failsafe, but it provides some defense in depth for an already compromised host.