Because it is open outbound through the firewall, many applications send their traffic across port 80 to avoid firewall issues. This has led to port 80 being called the Firewall Traversal Exploit. Port 443, then, is the Secure Firewall Traversal Exploit because it lets traffic out in an encrypted fashion.
Because it's encrypted, users can bypass the protections in place for HTTP to download viruses, access forbidden sites, and leak confidential information. This is limited only by the availability of SSL sites, and in recent years webmail like GMail has gone to full SSL sessions. Bad guys can easily set up SSL as well. Without an SSL proxy, all you can do to address these concerns is block by IP address, and IP addresses change frequently and are less likely to be categorized in a URL block list.
When you use an SSL proxy, the web traffic is terminated at the proxy server and a new request is made to the remote server. The client browser uses a certificate from the proxy to secure data during the first leg of this transaction, which will result in a certificate error if you don’t deploy the proxy’s self-signed certificate as a trusted root. Because the client never sees the certificate of the remote server, the user gets no information about the trustworthiness of that certificate. For this reason it is necessary to either block all bad certificates or make sure your SSL proxy can pass that certificate information on when the certificate is expired or does not chain to a trusted root.
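As a sketch, the block-all-bad-certificates policy boils down to two checks. The function and field names here are my own illustration, not any particular proxy's API:

```python
from datetime import datetime, timezone

def should_block(not_after, chains_to_trusted_root):
    """Decide whether the proxy should refuse a server certificate.

    not_after: the certificate's expiry as an aware datetime
    chains_to_trusted_root: whether the chain ends at a trusted CA
    """
    expired = datetime.now(timezone.utc) > not_after
    return expired or not chains_to_trusted_root

# an expired certificate is blocked even if its chain is valid
print(should_block(datetime(2001, 1, 1, tzinfo=timezone.utc), True))   # True
# a current certificate with a trusted chain is allowed through
print(should_block(datetime(2099, 1, 1, tzinfo=timezone.utc), True))   # False
```

The alternative, passing the error through so the user sees it, depends on the proxy re-signing the intercepted session with a deliberately untrusted certificate, which not every product supports.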
The SSL proxy can use the hostname (CN) in the server certificate to make a URL categorization decision to intercept or tunnel the traffic.
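For illustration, here's a minimal sketch of that CN-based categorization. The category table and the "uncategorized" fallback are my own assumptions; a real proxy would query its URL-filtering database:

```python
def categorize(cn, categories):
    """Map a certificate CN (possibly a wildcard like *.example.com)
    to a URL category, falling back to 'uncategorized'."""
    host = cn[2:] if cn.startswith("*.") else cn
    labels = host.split(".")
    # try the full name first, then progressively broader parent domains
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in categories:
            return categories[candidate]
    return "uncategorized"

# hypothetical category table standing in for a URL block list
cats = {"mail.example.com": "webmail", "example.net": "gambling"}
print(categorize("mail.example.com", cats))   # webmail
print(categorize("*.example.net", cats))      # gambling
print(categorize("somewhere.test", cats))     # uncategorized
```

Note that the CN only tells you the hostname, not the full URL, so categorization at this stage is necessarily coarser than it is for plain HTTP.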
Because you can intercept based on URL categorization, you could choose to intercept (and block) only websites that are in your blocked categories. This is the simplest implementation of an SSL proxy. It blocks sites that wouldn’t have been blocked before, and it doesn’t interfere with anything else. If a computer doesn’t have your certificate in its trusted root store, it’s not that bad because the site would have been blocked anyway.
A slightly more intrusive step is to also intercept webmail sites. Webmail sites can deliver malware even though the site itself is legitimate; by intercepting the site, the download is scanned by the antivirus layer. A related idea is intercepting all uncategorized sites so they can be scanned.
A full implementation involves intercepting everything not categorized as a financial site. Intercepting financial websites is not recommended, for obvious reasons.
Intercepting everything allows you to scan all downloads for viruses. The main drawback is you’ll have more issues with web applications not conforming to HTTP standards.
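The escalating policies above could be sketched as one decision function. The mode names and category labels here are purely illustrative, not product terminology:

```python
def decide(category, mode, blocked_categories):
    """Return 'block', 'intercept', or 'tunnel' for a categorized HTTPS site.

    mode names the policy tier (illustrative labels only):
    'block-only', 'webmail', 'uncategorized', or 'full'.
    """
    if category in blocked_categories:
        return "block"  # intercept just far enough to block it
    if mode == "block-only":
        return "tunnel"
    if mode == "webmail":
        return "intercept" if category == "webmail" else "tunnel"
    if mode == "uncategorized":
        return "intercept" if category in ("webmail", "uncategorized") else "tunnel"
    # 'full': intercept everything except financial sites
    return "tunnel" if category == "financial" else "intercept"

print(decide("gambling", "block-only", {"gambling"}))  # block
print(decide("news", "block-only", {"gambling"}))      # tunnel
print(decide("financial", "full", set()))              # tunnel
```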
I think the simplest option of only intercepting websites classified in categories on your block list is best. It provides additional security without potential for complications. You’d have to make a security decision for your own environment.
There are security considerations to intercepting traffic. When you only intercept a site to block it, you don’t handle sensitive data, but as you intercept other categories you must take care: sensitive data may now be exposed in clear text. You may want to think twice about what you are logging and caching. If any off-box analysis is performed, you need to encrypt the connection and make sure nothing is left on the remote box.
A lot of attacks occur over the web, and it’s important to provide the best defense. It’s no longer good enough to ignore 443/TCP.