Many web developers believe that newer Microsoft frameworks such as ASP.NET (MVC and Web Forms) and .NET Core protect them from many of the security vulnerabilities present in classic ASP and other older technologies. However, this is a false sense of security: more than 67% of .NET web sites (of 7,184 scanned) are nonetheless at risk, largely due to easily preventable oversights. This strikingly high percentage demonstrates that developers still struggle to get basic .NET security configurations right, even today. Though this blog post is somewhat dated, every listed issue remains a potential pitfall in modern web development, with the exception of Hash DoS. Critically, the post highlights all the ways developers neglect to address even the lowest-hanging fruit: disabling the public display of errors and stack traces, ensuring request validation remains enabled, and protecting application error logs.
Let’s first cover custom errors, stack traces, and tracing. Oftentimes developers will set the custom errors mode to “Off” (e.g., in a web.config file) in order to debug application errors more efficiently as they inevitably occur. Or, they may enable tracing, which appends diagnostic info such as server variables, web form control tree info, framework version, and event timing to the output of the page (e.g., by setting the Trace attribute in a page’s @Page directive to “True”). However, many forget to change these settings in production environments – as much as 50% of the time, according to the article. The problem is that displaying application errors or trace info leads to a great deal of information leakage that can be useful to attackers, including the technical details of the web application, the environment, user-specific data, database connection strings, etc. This gives malicious users a significant head start in understanding the underpinnings of a site and how best to attack it. And, as the article points out, keeping .NET web apps locked down is important not only to protect against known vulnerabilities but also against those yet to be discovered.
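For reference, a production web.config can lock tracing down with a few standard ASP.NET elements. This is a minimal sketch (the defaultRedirect page name is just an illustrative placeholder):

```xml
<configuration>
  <system.web>
    <!-- Disable page-level trace output entirely in production;
         localOnly="true" is a second line of defense if tracing
         is ever re-enabled, restricting output to localhost. -->
    <trace enabled="false" localOnly="true" />

    <!-- Compiling in release mode also avoids leaking debug detail. -->
    <compilation debug="false" />
  </system.web>
</configuration>
```

Note that a page-level Trace="True" attribute overrides this section, so it pays to search the codebase for stray @Page directives as well.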
Unfortunately, I was able to find examples of this oversight on campus. E.g., here are a couple of CSU sites currently revealing errors to unauthenticated users: https://wsnet2.colostate.edu/CWIS53/apply/Login/%3Cscript%3Ealert(‘hi’)%3C/script%3E and https://wsnet2.colostate.edu/cwis466/rentals/Login.aspx?Name=Sh%3Cscript%3Ealert(%22Hello%22);%3C/script%3Eaun (for full disclosure, this second example is my own archived RentalSearch system that is nonetheless published to a production server). Because custom errors are set to Off, the full stack trace leading to the error is displayed on the page. The solution is simply to set the mode attribute of the customErrors element in one’s web.config file to either “On” or “RemoteOnly” like so:
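A minimal sketch of that fix (the ~/Error.aspx redirect target is an illustrative placeholder, not a page from the sites above):

```xml
<configuration>
  <system.web>
    <!-- "RemoteOnly" shows detailed errors to localhost only;
         remote visitors see the friendly defaultRedirect page.
         Use "On" to hide details from everyone, including developers. -->
    <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
  </system.web>
</configuration>
```

“RemoteOnly” is usually the pragmatic choice, since it preserves full stack traces for developers debugging directly on the server while keeping them away from the public.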
Finally, the article addresses unprotected error logs – in particular ELMAH, one of the more popular ways of logging errors on an ASP.NET website. ELMAH’s logs are often accessible online by appending /elmah.axd to the application’s root URL. Once again, I was able to find instances at CSU, including https://wsnet2.colostate.edu/cwis466/rentals/elmah.axd. As the article points out, exposing error logs in such a way offers an abundance of critical information to hackers, allowing them to do all sorts of bad things, including session hijacking. Taking the example above, if one were to visit https://wsnet2.colostate.edu/cwis466/rentals/Login.aspx?Name=Tr%3Cscript%3Ealert(%E2%80%98Hello%E2%80%99);%3C/script%3Eoy, one can quickly see that the error is logged and immediately viewable online. Addressing this issue is once again easy – you can either require authorization to view the logs or disable remote access to them entirely:
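A sketch of both options in web.config (the "Administrators" role name is an illustrative assumption; substitute whatever role your site actually uses):

```xml
<configuration>
  <!-- Option 1: require an authorized role to reach elmah.axd. -->
  <location path="elmah.axd">
    <system.web>
      <authorization>
        <allow roles="Administrators" />
        <deny users="*" />
      </authorization>
    </system.web>
  </location>

  <!-- Option 2: tell ELMAH itself to refuse remote access,
       so the log page only answers requests from localhost. -->
  <elmah>
    <security allowRemoteAccess="false" />
  </elmah>
</configuration>
```

If the logs are never needed through the browser at all, removing the elmah.axd handler registration from web.config closes the door completely.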
Being an ASP.NET developer myself, this topic is of special interest to me. It illustrates how easy it is to make an application insecure, be it in 2012 or 2020. Clients are typically focused on project deliverables and functionality, and in the interest of meeting those goals as quickly as possible, it’s all too easy to overlook something as simple as a one-line configuration vulnerability. The article noted that all of these vulnerabilities are “dead simple” to fix and don’t require re-architecting a site – they literally take seconds to address. However, most of these issues leave no visible evidence of the vulnerability – you have to remember to specifically seek them out and close the data leakage.
Besides this initial takeaway, I am also surprised that more than 67% of tested .NET sites present one or more of these vulnerabilities. This article served as a reminder that I need to peruse each of my own sites to ensure that these risks are mitigated. Even though I understand everything written in the article, I am nonetheless guilty of not closing all of these security holes in my applications, but I plan to do so as soon as possible!