Best Approach for Removing an XSS Vulnerability
I have been developing a WebObjects application, and I found that my application is vulnerable to XSS through the URL, but not when malicious input like
<script>alert("hi")</script> is entered into form fields.
So I have currently employed a URL-rewriting technique in the Apache web server to work around this issue.
I don't know enough about XSS.
I want to ask the experts here: is this the right approach to solving XSS when input through form fields does not appear to be vulnerable?
A blacklisting approach is not the correct way to solve this. You need to determine why your URLs are vulnerable and why your form-field input is not. I suspect you should check more thoroughly that your form-field input really is safe. Are you actively sanitizing those fields on the server side, or simply relying on the browser to "enforce" sanitizing for you? If the latter, then you have vulnerabilities in your form fields as well.
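One way to test this is to submit the payload directly from code, bypassing the browser (and any client-side checks) entirely. A minimal sketch, assuming Java 11+; the endpoint URL and the field name `comment` are placeholders for your application's own:

```java
// Build the same form submission a browser would send, but from code,
// so any client-side "sanitizing" is bypassed entirely.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class FormProbe {
    // Construct a POST request carrying the attack payload as a form field.
    public static HttpRequest buildProbe(String endpoint, String field, String payload) {
        String body = field + "=" + URLEncoder.encode(payload, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest probe = buildProbe("https://example.com/app/submit", // placeholder URL
                "comment", "<script>alert(\"hi\")</script>");
        System.out.println(probe.uri());
        // Send it with HttpClient.newHttpClient().send(probe, ...); if the raw
        // <script> tag later appears unescaped in a rendered page, the form
        // path is vulnerable after all.
    }
}
```

If the payload survives that round trip unescaped, the browser was doing your "sanitizing" for you.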
Nope. You should not try to fix XSS by doing URL rewriting in your Apache web server. That's not a good way to go about it, as the result will be fragile at best. In particular, if you stick with your current approach, there will most likely still be sneaky ways to exploit the XSS.
Instead, if the web application has XSS holes in it, fix the darn web application. This is an application security problem; you have to fix it by fixing the application. Trying to patch things up externally is probably going to be leaky like a sieve.
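"Fixing the application" concretely means encoding user-controlled data for the context it is rendered into, at the point of output. A minimal sketch of HTML-context escaping (not your application's actual rendering code, just an illustration of the principle):

```java
// Escape the characters that are significant in HTML so injected markup
// is rendered as inert text, regardless of whether it arrived via a URL
// parameter or a form field.
public class HtmlEscape {
    public static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String attack = "<script>alert(\"hi\")</script>";
        System.out.println(escapeHtml(attack));
        // -> &lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt;
    }
}
```

Note this handles only HTML body context; data placed into attributes, JavaScript, or URLs needs encoding rules for those contexts.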
P.S. Your list of keywords is insufficient. You've built a blacklist, and like any other blacklist, it is inevitably incomplete: you're missing some stuff (*cough* onerror *cough*). I'm not going to try to provide you with a more complete list, because the approach is fundamentally broken. Rather than sticking with it and trying to extend your list of attributes to filter, you need to ditch the current approach entirely and fix the problem at its source.