Sunday, August 15, 2010

The Seven Deadly Sins of Security Vulnerability Reporting

In a previous post, I opened the door to a new blog post whose goal is to provide a few specific recommendations (from a security researcher's perspective) for organizations, commercial companies, and open-source projects, so they can improve the resources and procedures they put in place to be notified of, and act on, security vulnerabilities affecting their official web site(s), services, or any of their products.

The Seven Deadly Sins of Security Vulnerability Reporting aims to be an easy-to-follow list, not very technical but security relevant (so that we can point people to it), for any organization interested in improving the process of dealing with security vulnerabilities reported by external security researchers (or third parties). It is the result of common issues we have found when reporting vulnerabilities and findings during penetration tests, security research, and incident handling:
  1. Communication channels
  2. Confidentiality
  3. Availability
  4. ACK
  5. Verification
  6. Interactivity
  7. "Researchability"
I strongly recommend going through the list this summer, identifying which sins you can redeem in your environment, and implementing the changes in September! Let's get ready for the new season!

1. Communication channels
Do you have clear and simple communication channels to be notified about security vulnerabilities in your environment and products?

Provide a clear and simple notification channel (or channels) for researchers to submit security vulnerabilities about your environment and products. Two of the most common methods are e-mail (use an easy-to-remember address, such as security@yourdomain.com or abuse@yourdomain.com, both very common nowadays) and an easy-to-remember web page, such as https://www.yourdomain.com/security/. This page might include a web-based contact form, although I personally prefer e-mail, so it can simply contain the details needed to contact your security team.

2. Confidentiality
Do you have secure communication channels to receive sensitive and/or confidential notifications?

Implement secure communication channels to receive the security-related notifications by using strong encryption. This way, anyone eavesdropping on the communication won't be able to get all the details about the vulnerability.
  • For the web-based option, make use of a trusted HTTPS connection for the web page used for the notifications (Did you see the "https://" above?). Do not use HTTP!
  • For the e-mail option, create and publish the GPG/PGP key associated with the e-mail address used for the notifications. Have the key ready in advance, and make it easily available and verifiable: use your web site plus the public GPG/PGP key servers for its publication and distribution (a command-line sketch is shown below).
Always use the secure channel. As obvious as it might sound, for some reason, it is very common for people to end up not encrypting one of the messages at some point during the process of solving the vulnerability and finding the fix, disclosing confidential details as a result.
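For the GPG/PGP option, a minimal command-line sketch of creating and publishing the key could look like the following (the e-mail address, key id, and key server are just examples, not specific recommendations):

$ gpg --gen-key
$ gpg --armor --export security@yourdomain.com > security-pubkey.asc
$ gpg --keyserver pool.sks-keyservers.net --send-keys <your-key-id>

The exported "security-pubkey.asc" file can then be published on https://www.yourdomain.com/security/ so that researchers can compare it with the copy retrieved from the public key servers.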

3. Availability
Are the notification channels available 24x7, especially when they are required? ;)

Ensure the web page and e-mail address used for the notifications work as expected and are always available. This is easier said than done. Are you sure someone in the organization is receiving the notifications? Check all the notification methods frequently, or at least, from time to time.
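As a rough illustration (the URL and alert address below are just examples), a simple scheduled check of the web notification channel could look like the following sketch, assuming a local mail command is available; for the e-mail channel, the most reliable test is to periodically send a real notification and confirm a human replies:

#!/bin/sh
# Alert the operations team if the security notification page stops responding
curl -sSf -o /dev/null https://www.yourdomain.com/security/ || \
  echo "The security notification page is unreachable" | mail -s "ALERT: security page down" ops@yourdomain.com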

4. ACK (Acknowledgment)
How can the researcher know you have received the notification?

As soon as possible, reply with a quick e-mail message to the researcher confirming you have received the notification, and explain that it is going to be reviewed and verified in the next few days. This way the researcher can confirm you got the notification, she knows when to expect a more detailed follow-up message (try to define what "a few days" means), and you are both in sync.

The initial acknowledgment message (and associated exchange) is a great time for sharing GPG/PGP keys and verifying the secure communication channel works.

If you put in place an automatic response system to confirm the reception of messages on the web or e-mail notification channels, ensure you also provide a second, human-written reply, to demonstrate there is someone behind it reading and processing the notifications.

Remember, e-mail is not a 100% reliable notification system (spam filters apart ;)), so I recommend acknowledging relevant messages throughout the whole process.

5. Verification
How do you know whether the notification is related to a new vulnerability (0-day) or to a well-known issue?

Take your time to read and understand in detail the contents of the notification, and do not make assumptions until you test the issue in your environment. Clarify with the researcher any unclear points or undefined areas until you are able to reproduce it. Verify, and then confirm or deny, whether it is something you already knew about or a new vulnerability (0-day).

6. Interactivity
Once you confirm it is a new vulnerability, design a plan to fix it, and keep all parties involved informed about how the plan progresses.

This is all about information exchange, isn't it? Improve the information exchange procedures you put in place during the process of testing and fixing the issue. I highly recommend periodically notifying the security researcher about how the fix progresses and any other relevant actions you take. During the process, you can plan and agree on the deadlines for future actions and the final (responsible) disclosure date.

Apart from sending periodic status updates, avoid leaving e-mails unanswered. If you do not have an answer yet, or need time to review the data and provide one, once again, send a quick message to the other party and set a reasonable timeframe to keep the information exchange flowing.

7. "Researchability"
All the previous sins provided guidance to the organization that has the responsibility to fix the vulnerability, but... what about the security researcher that found it?

I'm sure there are things we (as security researchers) do wrong, so this last sin is reserved for us :) A few common points not to forget are:
- Follow the responsible disclosure principle (it seems nobody really knows what this means, or everybody has their own definition ;) ).
- GPG/PGP does not encrypt the e-mail subject, so provide enough information in the subject to differentiate the notification from others (for example, by using an identifier), but do not include so much information that other people could figure out what is going on.
- Everybody's time is valuable, so before taking the time to report a vulnerability, be sure you have thoroughly tested it and are able to reproduce it (if possible, on the latest product version and, definitely, with the latest patches and updates applied). Try to confirm in advance whether it is a well-known, previously published vulnerability or not. You don't want to end up wasting everybody's time (including yours) only to confirm it is something that was already fixed a while ago.

Credit
Once a fix for the vulnerability is available and it is finally announced, provide credit where appropriate.

Thursday, May 27, 2010

Capturing SMB Files with Wireshark

Most corporate networks include one or more file servers where shared information is stored and shared across the network using the SMB protocol. These servers are used as a repository for different departments, which share the same infrastructure but must have access to different and separate information sets, some of which will probably be very sensitive and confidential, like files belonging to top management, Human Resources, or the Legal department, just to name a few examples.

The access control to the information in the file servers is enforced using the SMB protocol authentication, usually integrated with some unified directory (like Microsoft Active Directory).

While the authentication can be performed in a secure way, the information flow between the server and the client is usually not encrypted, as is the case with the default SMB configuration. This makes the information vulnerable to any sniffing activity performed on the company's internal network.

In our effort to identify weak points of corporate networks, we wanted to demonstrate how this vulnerability could be easily exploited, so that organizations better understand the risk this vulnerability poses for them, and how to protect themselves from it.

For that purpose, we have developed a plug-in for the popular network analyzer Wireshark. The plug-in adds to Wireshark the ability to extract and save separately, from any network capture, either live or previously saved, the contents of any files transferred between a server and a client using the SMB protocol. We have successfully used this plug-in in some real pentests, demonstrating the potential impact of this vulnerability.

Once installed, identifying SMB streams in a Wireshark capture is easy: click on Export -> Objects -> SMB, and look at the window that pops up, which will look similar to this one:

Then, just selecting the desired file and clicking "Save As" will put the captured file on disk and allow you to open it with the right program.
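As a side note, more recent Wireshark releases also allow the same extraction to be scripted from the command line with tshark (the capture file name and output directory are just examples):

$ tshark -r capture.pcap -q --export-objects "smb,extracted_smb_files"

This saves every SMB-transferred object found in the capture to the given directory in one pass.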

Please note that not all files will be 100% captured and there are some files that will not fit into memory.

A white paper with further details, as well as the plug-in itself, is freely available in the Lab section of our web site.



(UPDATE) This functionality is included in the official version of Wireshark from release 1.5.1 on.
(UPDATE) We have released a patch that corrects some bugs in the Export Objects SMB functionality of version 1.5.1. It has been included in the development trunk from SVN revision 36979 on. Linux users can download the source code and compile it. For Windows users, a Windows installer is available in our lab.

Monday, May 10, 2010

Session-fixation vulnerability in Joomla! (20100423)

At the end of last year, during a web-app pen-test of a target application based on Joomla!, a well-known open-source web-based Content Management System (CMS), I discovered that the Joomla! core session management system was prone to a session-fixation vulnerability: Joomla! failed to change the session identifier after a user authenticated. The issue was finally made public on April 23, 2010.

As far as I know, and at least according to Google, this is the first sessation-fixation vulnerability found in the information security history. Typos make you famous! ;)

Because the target web portal I was testing used the Joomla! built-in session management capabilities both before and after authentication, it was vulnerable. An attacker can exploit this vulnerability to hijack a user's session and gain unauthorized access to the protected portions (those requiring authentication) of the affected application.

Joomla! versions 1.5 through 1.5.15 are affected. Although I discovered the issue in version 1.5.14 and notified the Joomla! Security Strike Team (JSST) appropriately, through the Joomla! Security Center and by e-mail in early November 2009, the fix didn't make it into the next version. The issue was fixed in version 1.5.16 (while the latest available version as of today is 1.5.17).

As an exercise for the reader, you can review the Joomla! source code if you are interested in the fix; start with the "libraries/joomla/session/session.php" file (other files are affected too). The Joomla! session cookies are of the form "cookie_name=cookie_value", where both the cookie name and value are MD5 hashes. Example:

Set-Cookie: f2345657e5f302e02d18922ba903a4ef=74d0a95cfdf16feb8a9678510686ba63; path=/
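A quick way to check for this behavior in any web application (this is a generic sketch, not Joomla!-specific; the URL and form field names below are illustrative) is to compare the session cookie issued before authentication with the one in use after logging in, for example with curl:

$ curl -s -c pre.txt -o /dev/null http://target.example.com/index.php
$ curl -s -L -b pre.txt -c post.txt -o /dev/null --data "username=testuser&passwd=testpass" "http://target.example.com/index.php?option=com_user&task=login"
$ grep -v '^#' pre.txt post.txt

If the session cookie name and value are identical in both cookie jars after a successful login, the application is not renewing the session id upon authentication.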

Although some public sources state that session-fixation vulnerabilities can only be exploited by tricking the user into logging in after following a specially crafted link, this is not entirely accurate. There are multiple methods and scenarios that a clever attacker (or pentester) can use to exploit them, especially when combined with other vulnerabilities:
  • The most commonly used method is the one referred to above: an attacker can craft a web link that contains the attacker's session id. This is only possible if the application parses GET parameters to set the session id value, instead of, or as well as, cookies. During my tests I confirmed the target Joomla! application was vulnerable to this scenario.
  • If the attacker can add or modify the contents of the vulnerable web server, or of a web server on the same domain (depending on the scenario), she can potentially add content that sets the session id to her preferred value.
  • Although XSS vulnerabilities are commonly demonstrated by stealing the victim's cookies, they can be used for the opposite action. An XSS can be leveraged to set the victim's cookie in an unauthenticated state, and then gain higher privileges through the session-fixation vulnerability once the victim has authenticated.
  • Another common scenario for exploiting session fixation focuses on intercepting and modifying the victim's traffic (an unencrypted HTTP session) through MitM attacks. Because the victim is just browsing an unprotected portion of the site and is not authenticated yet, the web application uses HTTP (vs. HTTPS). An attacker can easily inject a new cookie header ("Set-Cookie:") into that traffic, and its value will be accepted and used by the victim's web browser.

Finally, there are a few lessons we should learn, and things we can improve, regarding the management of web application vulnerabilities (some of these also apply to other types of vulnerabilities). As I'm currently dealing with similar issues in other open-source and commercial web applications, I want to share a few thoughts:
  • The resources and procedures available at many open-source projects and commercial companies to be notified of and act upon security vulnerabilities can clearly be improved. I will suggest a few specific recommendations in a future post.
  • Session-fixation vulnerabilities are still extremely prevalent in many web applications and web-based products today (wait for future related posts on this blog :).
  • Although, in theory, session-fixation issues can simply be fixed by using a different session id before and after authentication, it turns out that fixing session-fixation (as well as other session management) vulnerabilities is not an easy task. The change typically affects multiple portions of the application and other related applications and modules, such as the authentication and authorization code. As a result, fixing it sometimes requires fully redesigning the web application.
  • Based on the previous assertion, session management countermeasures must be planned and tested during the application design phase, unless you really want to feel the pain of fixing them afterwards.
Keep your sessions safe!

UPDATE: This finding has been assigned Taddong's vulnerability id TAD-2010-001 for any further reference. A Taddong security advisory won't be released for it unless we identify enough demand from the infosec community.

Wednesday, April 28, 2010

Certificate-based Client Authentication in WebApp PenTests

One of the key attack tools for performing effective Web Application Penetration Tests (WebApp PenTests) is the interception proxy, which allows the analyst to inspect and modify all the requests and responses exchanged between the web browser and the target web application. Some of the most popular ones, such as Paros, Webscarab, or Burp, are developed in Java, making the Java platform a prerequisite to run them.

Sun/Oracle has recently released new updates for Java: Java 6 Update 19 in March 2010, fixing 27 security issues, and Java 6 Update 20 in April 2010, including a couple of fixes. If you have updated the Java version of your pentesting system (you did, didn't you?), you must be aware that your interception proxies won't be able to audit web applications that make use of client X.509 certificates for authentication. This specifically affects pentests of e-government and e-banking web applications making use of client certificates, such as those stored on smart cards (like some European national identity cards); in particular for Spain, dozens of websites integrate authentication through the electronic national id card, "DNI electronico" (DNIe).

The reason is that Java 6 Update 19 includes a fix for the famous SSL/TLS renegotiation vulnerability from November 2009 (CVE-2009-3555). The SSL/TLS renegotiation feature is specifically used by certificate-based client authentication, and the fix disables SSL/TLS renegotiation in the Java Secure Sockets Extension (JSSE) by default. As a result, when you try to access a web resource that requires certificate-based client authentication through the interception proxy, it generates the following Java SSL/TLS error message (javax.net.ssl.SSLException): "HelloRequest followed by an unexpected handshake message".

Webscarab error message:

Burp error message:


However, it is still possible to re-enable the SSL/TLS renegotiation in Java by setting the new system property sun.security.ssl.allowUnsafeRenegotiation to true before the JSSE library is initialized. The following Windows command line launches Burp with SSL/TLS renegotiation enabled:

C:\>java -jar -Xmx512m -Dsun.security.ssl.allowUnsafeRenegotiation=true "C:\Program Files\burpsuite_pro_v1.3\burpsuite_pro_v1.3.jar"
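On Linux or Mac OS X, the equivalent command (adjust the path to your Burp jar) would be something like:

$ java -Xmx512m -Dsun.security.ssl.allowUnsafeRenegotiation=true -jar /path/to/burpsuite_pro_v1.3.jar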

Keep your WebApp PenTests rolling!

Shameless plug: Interested in learning the art of WebApp PenTesting? I will be teaching SANS SEC542, "Web Application Penetration Testing and Ethical Hacking", in London (May 10-15, 2010) in English and in Madrid (September 20-25, 2010) in Spanish.

Saturday, April 24, 2010

Manual Verification of SSL/TLS Certificate Trust Chains using Openssl (Part 2/2)

Part 1 of this article covered how to manually verify the SSL/TLS certificate trust chain for a given "invalid" certificate using openssl. We used the Internet Storm Center certificate as an example, whose chain has three elements: the ISC (isc.sans.org) certificate, an intermediate USERTrust CA, and the Entrust root CA.


A quick look at the Firefox Preferences (Mac OS X) or Options (Windows and Linux), specifically in the "Advanced - Encryption - View Certificates - Authorities" section, confirms that the intermediate CA certificate from USERTrust was the one missing in Firefox 3.6.3 and, therefore, the one invalidating the certificate trust chain. None of the available USERTrust certificates has the right fingerprint, "af:a4:40:af...86:16".


The client browser does not have the intermediate certificate needed to verify the full certificate trust chain, so it generates the error.

The most common method to avoid this type of certificate validation error at the web server level, and thus for all the web server's clients, is to deliver the missing intermediate certificate from the web server itself to the client at connection time.

In the Apache web server world, you simply need to get a copy of the intermediate certificate, in this case "USERTrustLegacySecureServerCA.crt" (see Part 1), and reference it through the "SSLCertificateChainFile" directive in the Apache configuration file, "httpd.conf", specifically in the section associated with the virtual host. Example for the ISC web server (not the real config file):

<VirtualHost 10.10.10.10:443>
DocumentRoot /var/www/html
ServerName isc.sans.org
SSLEngine on
SSLCertificateFile /path/to/isc.sans.org.crt
SSLCertificateKeyFile /path/to/isc.sans.org.key
SSLCertificateChainFile /path/to/USERTrustLegacySecureServerCA.crt
</VirtualHost>

These three mod_ssl directives point to the server certificate, the server private key, and the intermediate CA certificate, respectively.
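After checking the configuration and gracefully restarting Apache (the apachectl command may be named apache2ctl on some distributions), you can confirm the intermediate certificate is now being delivered to clients by listing the full chain presented during the handshake; the -showcerts option prints every certificate received from the server:

$ apachectl configtest && apachectl graceful
$ openssl s_client -connect isc.sans.org:443 -showcerts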

End-user awareness regarding the acceptance of invalid digital certificates is a must!

Manual Verification of SSL/TLS Certificate Trust Chains using Openssl (Part 1/2)

This week, during my Internet Storm Center (ISC) shift, Firefox 3.6.3 (the latest available version) displayed a digital certificate error when accessing the ISC login page through SSL/TLS: https://isc.sans.org/myisc.html. I confirmed this on a couple of Firefox instances running on Mac OS X and Windows XP.


We also got a few reports from ISC readers about the same issue, although other people running the same browser version (and even the same language, EN) on the same OS platforms didn't get any error message. In the end, the reason was that a new ISC digital certificate had recently been installed, and the required intermediate certificate was missing in some web browsers. As a result, the browser couldn't validate the full digital certificate chain to ensure you were really connecting to the website you intended to connect to.

This is a common scenario in security incidents, where Man-in-the-Middle (MitM) attacks or direct web server breaches modify the SSL/TLS certificate offered to the victim; when it is accidentally accepted, the attacker can intercept and modify the "secure" HTTPS channel. As you may find yourself dealing with a similar situation in the future... how can you (as I did) check the real reason behind the SSL/TLS certificate validation error? By manually verifying the SSL/TLS certificate trust chain, or certificate hierarchy, with openssl.

The goal is to manually follow all the validation steps that are commonly performed in an automated way by the web browser.

Step 1: Check the certificate validation error and download the controversial digital certificate.

$ openssl s_client -connect isc.sans.org:443
depth=0 /C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 /C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
verify error:num=27:certificate not trusted
verify return:1
depth=0 /C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
verify error:num=21:unable to verify the first certificate
verify return:1
CONNECTED(00000003)
---
Certificate chain
0 s:/C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
i:/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/CN=USERTrust Legacy Secure Server CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGATCCBOmgAwIBAgIQOxCOI6FirgnSgN/fCRi57jANBgkqhkiG9w0BAQUFADB/
MQswCQYDVQQGEwJVUzELMAkGA1UECBMCVVQxFzAVBgNVBAcTDlNhbHQgTGFrZSBD
aXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxKjAoBgNVBAMTIVVT
RVJUcnVzdCBMZWdhY3kgU2VjdXJlIFNlcnZlciBDQTAeFw0xMDAzMzEwMDAwMDBa
Fw0xMTAzMzEyMzU5NTlaMIH5MQswCQYDVQQGEwJVUzEOMAwGA1UEERMFMjA4MTQx
ETAPBgNVBAgTCE1hcnlsYW5kMREwDwYDVQQHEwhCZXRoZXNkYTESMBAGA1UECRMJ
U3VpdGUgMjA1MRowGAYDVQQJExE4MTIwIFdvb2Rtb250IEF2ZTEbMBkGA1UEChMS
VGhlIFNBTlMgSW5zdGl0dXRlMSgwJgYDVQQLEx9OZXR3b3JrIE9wZXJhdGlvbnMg
Q2VudGVyIChOT0MpMSYwJAYDVQQLEx1Db21vZG8gVW5pZmllZCBDb21tdW5pY2F0
aW9uczEVMBMGA1UEAxMMaXNjLnNhbnMub3JnMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAvYLqphsusuyiVDookRrfsZDx0T1SOf1EQmzUFSK0qVTv+RKM
VanUSwsbWuMYrS6W7bGld2/qCGuk2rxMjwtiNE1x2rtaXUkngoKfnQKH+zGErbmc
Ead2IjY5OANT6xnxfJS8a29XTqRrKJW8c/PR9NYCGUmq1X+0EnB4+wPZbDsebZ3h
2wISxDaEJlOYbc3MzNyKwbTkwsyn1uwqZS0DQyKqVGL4hZeVmy5OwW9b8/RLYOov
NYxTH0bHlbiaOuwakm4cx/ZJMELSBhbgwjt2sLkD8CShdES6fHPPAeMLW//ir3em
RMXABEnF1wE5CmvMDe9e6b/joHurw34BISM8twIDAQABo4IB/DCCAfgwHwYDVR0j
BBgwFoAUr6RAr58W/qsx/fvVl4v1kaMkhhYwHQYDVR0OBBYEFN6hBRDWEgv4hVZI
hR1/CjtAx8k2MA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdJQQW
MBQGCCsGAQUFBwMBBggrBgEFBQcDAjBCBgNVHSAEOzA5MDcGDCsGAQQBsjEBAgED
BDAnMCUGCCsGAQUFBwIBFhlodHRwczovL2Nwcy51c2VydHJ1c3QuY29tMEsGA1Ud
HwREMEIwQKA+oDyGOmh0dHA6Ly9jcmwudXNlcnRydXN0LmNvbS9VU0VSVHJ1c3RM
ZWdhY3lTZWN1cmVTZXJ2ZXJDQS5jcmwwfQYIKwYBBQUHAQEEcTBvMEYGCCsGAQUF
BzAChjpodHRwOi8vY3J0LnVzZXJ0cnVzdC5jb20vVVNFUlRydXN0TGVnYWN5U2Vj
dXJlU2VydmVyQ0EuY3J0MCUGCCsGAQUFBzABhhlodHRwOi8vb2NzcC51c2VydHJ1
c3QuY29tMGkGA1UdEQRiMGCCDGlzYy5zYW5zLm9yZ4IRaXNjLmluY2lkZW50cy5v
cmeCDGlzYy5zYW5zLmVkdYINaXNjMS5zYW5zLm9yZ4INaXNjMi5zYW5zLm9yZ4IR
d3d3LmluY2lkZW50cy5vcmcwDQYJKoZIhvcNAQEFBQADggEBAGBhcG/MGgAsiwsB
AAg4ZYndAEukeYXpIbXzU5T9ZqkqHPWiUearWDzBgWKJcpgF+v9ZCqCrKuYbXXnv
zqZi+BPKX3BSMxHlDUZ0C6tU/G6+FqcV4j19S+xPjAVvk28yqaZc9BNpLnCogAc/
F3gHL7SwCDFowP+RaqUlbUr1UtQgKDILWSOM4ukoVMbCt5nNmyXu0eDFb9tlWkJK
KdPYzuKCYfWS7/9bQ868fML3m1xCZqG7L0t/XAFSZYN3ytqtbRTyOjcjdEwSxwA5
O3gV+dW6qM7/AmGZu+0/Grw3WMwiXzO8tRAHYliNz9PDqnK2k5NE0VbKKhndsoRc
oAA+AfY=
-----END CERTIFICATE-----
subject=/C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
issuer=/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/CN=USERTrust Legacy Secure Server CA
---
No client certificate CA names sent
---
SSL handshake has read 2233 bytes and written 325 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 2048 bit
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : DHE-RSA-AES256-SHA
Session-ID: 08BED94D2BBA7E525FB37BFE20DCD155CE62C93871B41ABBDF810D663FFC4A61
Session-ID-ctx:
Master-Key: 620F10AF948333D43BCC2656E4493563C4A827A8BFAD46AF0815CF3643C602C0E1EBA3CD5DBFE0C4BA65F2DBD9762DF2
Key-Arg : None
Start Time: 1271777232
Timeout : 300 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
closed

From the output, and specifically the verify return code at the end, you can see that the server certificate cannot be verified.

First of all, create a "certs" directory to put all the required files in. Copy and paste the digital certificate to a file ("ISC.pem"), that is, the text from "-----BEGIN CERTIFICATE-----" to "-----END CERTIFICATE-----" (including both lines).
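If you prefer not to copy and paste by hand, the same certificate can be extracted directly with openssl (a quick sketch; saving it inside the "certs" directory is just a choice):

$ mkdir certs
$ openssl s_client -connect isc.sans.org:443 < /dev/null 2> /dev/null | openssl x509 -outform PEM > certs/ISC.pem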

Step 2: Identify the issuer and get its certificate.

Open the "ISC.pem" certificate file (by double-clicking on it on most operating systems) and inspect the following fields:
  • The certificate thumbprint or fingerprint that identifies the server certificate: "bd:95:df:ac...46:aa" (SHA1).
  • Issuer (under the "Certificate" section): Who generated and issued the server certificate? "USERTrust Legacy Secure Server CA" from "The USERTRUST Network".
  • The "Certificate Authority Key Identifier" or fingerprint (under "Certificate - Extensions"): "af:a4:40:af...86:16".
  • The "Authority Information Access" (under the same section): It contains a pointer to the digital certificate of the issuer certification authority (CA): "URI: http://crt.usertrust.com/USERTrustLegacySecureServerCA.crt".
Obtain a copy of the issuer certificate. The most secure option would be to get the certificate over HTTPS rather than HTTP, but this depends only on how the CA decided to make it available. Double-check with the CA website that the URL and the fingerprint are valid. In this case, USERTrust was acquired by Comodo, and the issuer certificate is available here (https link) and referenced in its list of certificates. This certificate belongs to the USERTrust intermediate CA and was the one not available in Firefox 3.6.3 by default, hence the root cause of the initial SSL/TLS error on the ISC website.

Although you might be tempted to perform the whole manual verification from the command line, it is not the most secure option, as you could be forced to use HTTP instead of HTTPS when using wget or curl. Depending on the version and platform of these tools, they may be distributed without a default list of trusted root certificates, or they may not use the list available on the system. Therefore, ** this is NOT the way to get the intermediate certificate **; use a web browser instead:

$ wget http://crt.usertrust.com/USERTrustLegacySecureServerCA.crt
--2010-04-20 17:32:44--  http://crt.usertrust.com/USERTrustLegacySecureServerCA.crt
...
2010-04-20 17:32:45 (32.0 MB/s) - `USERTrustLegacySecureServerCA.crt' saved [1073/1073]
$

Step 3: Try to verify the digital certificate again, but this time make use of the previously downloaded certificate ("USERTrustLegacySecureServerCA.crt").

Before using the downloaded certificate, we need to convert it to the PEM format (not required this time; exemplified later) and build the certificates directory required by the openssl "-CApath" option. The Unix "c_rehash" script helps create the appropriate directory structure and certificate hash symbolic links. Be sure to rename all certificates that are in PEM format to a .pem extension, for example "USERTrustLegacySecureServerCA.crt" to "USERTrustLegacySecureServerCA.pem":

$ c_rehash ./certs
Doing ./certs
ISC.pem => fc1aa8ab.0
USERTrustLegacySecureServerCA.pem => cf831791.0 
$

If we try to validate the certificate again, and we already have the certificates for all the intermediate and root CAs identified in the certificate trust chain stored in the "certs" directory, we will get a positive response: "Verify return code: 0 (ok)".

$ openssl s_client -CApath ./certs -connect isc.sans.org:443
CONNECTED(00000003)
depth=2 /C=US/O=Entrust.net/OU=www.entrust.net/CPS incorp. by ref. (limits liab.)/OU=(c) 1999 Entrust.net Limited/CN=Entrust.net Secure Server Certification Authority
verify return:1
depth=1 /C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/CN=USERTrust Legacy Secure Server CA
verify return:1
depth=0 /C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
verify return:1
---
Certificate chain
 0 s:/C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
   i:/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/CN=USERTrust Legacy Secure Server CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGATCCBOmgAwIBAgIQOxCOI6FirgnSgN/fCRi57jANBgkqhkiG9w0BAQUFADB/
...
oAA+AfY=
-----END CERTIFICATE-----
subject=/C=US/postalCode=20814/ST=Maryland/L=Bethesda/streetAddress=Suite 205/streetAddress=8120 Woodmont Ave/O=The SANS Institute/OU=Network Operations Center (NOC)/OU=Comodo Unified Communications/CN=isc.sans.org
issuer=/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/CN=USERTrust Legacy Secure Server CA
---
No client certificate CA names sent
---
SSL handshake has read 2233 bytes and written 325 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 2048 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: C898C8DB5CD9CDFEE404451BA3E19A440951A1960DAC1BA62FD35F23D9772B30
    Session-ID-ctx:
    Master-Key: EC4D939A112112AAAB01DFF5FA0A5F6C26C568C8DEBBDF3A61515E8CD83F257DAB5894BC450A97A7EE5ABAB0B1893795
    Key-Arg   : None
    Start Time: 1271778616
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)

---
closed

If the certificate chain or hierarchy contains additional certificates, that is, there are multiple intermediate CAs involved, you may need to repeat the same process and download the certificates for all the other intermediate CAs and the root CA (omitted for brevity). For example, the intermediate USERTrust certificate was issued by "Entrust.net Secure Server Certification Authority". This root CA certificate can be manually obtained in DER format from the Entrust website, with a fingerprint of "f0:17:62:13...d0:1a".

Once again, this DER file must be converted to PEM format using openssl:

$ openssl x509 -in entrust_ssl_ca.der -inform DER -outform PEM -out entrust_ssl_ca.pem

Finally, once the certificates directory contains all the intermediate and root CA certificate files that belong to the certificate chain being tested, rebuild it again using "c_rehash" and try to verify the certificate one more time.
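At that point, besides re-running the openssl s_client command, you can also verify the saved server certificate offline against the rehashed directory; a successful validation is reported as "OK":

$ openssl verify -CApath ./certs certs/ISC.pem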

Part 2 of this article covers the chain layout for the ISC certificate in this case, how to identify the missing certificate in the web browser's list of trusted certificates, and how to avoid this kind of error from the web server perspective, using Apache.

Tuesday, April 20, 2010

The Birth of the Taddong Security Blog

Almost four years ago we founded our first information security blog, RaDaJo. Today, with the establishment of our new company, Taddong, we are starting a new blog.

Taddong is an information security company established in Spain in 2010, with a worldwide service scope, focused on discovering and eliminating or mitigating the real risks that threaten the customer's networking and information technology infrastructures. We are a team of four highly skilled professionals in the information security field, with proven experience in different information technology areas: David Perez, Jose Pico, Monica Salas, and Raul Siles. More information about the company, including the meaning of Taddong, is available on its website, under company description.

The Taddong Security Blog will be our main channel for the publication of new articles, tools, and additional security research. Like its predecessor, this blog is born to publish our own analysis of, and thoughts on, cutting-edge information security topics and research we are involved in, from a purely technical perspective. It will also host the sequel to the book review series previously published on RaDaJo.

If you were a previous RaDaJo blog reader and follower, we will be glad to have you with us again in this new journey. If you are joining us for the first time, we welcome you on board. In any case, the blog relies on your presence: do not forget to subscribe to this new blog by using the syndication buttons available on the sidebar. Effective immediately, we are switching from RaDaJo to Taddong's blog.

You can contact us by e-mail:
info @ taddong .com