Recently I discovered that in my Firefox extension, CraigZilla, results were being cached by Firefox.
The typical approach to defeating the cache is to append a string to the end of the URL that is unique each time you fetch it. The most common method is to add a timestamp, which I often do for JS and CSS files.
```js
var url = "http://raleigh.craigslist.org/ele/index.rss";
var xmlhttp = new XMLHttpRequest();
// Append the current time in milliseconds as a cache-busting parameter,
// using "?" if the URL has no query string yet and "&" otherwise.
xmlhttp.open('GET', url + (url.match(/\?/) == null ? "?" : "&") + (new Date()).getTime(), true);
xmlhttp.onreadystatechange = function() {
    callbackFunction();
};
xmlhttp.send(null);
```
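With this code, the request URL comes out looking something like http://raleigh.craigslist.org/ele/index.rss?1299173602000, so every fetch is unique as far as the cache is concerned.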
This turned out not to work in my case, however, because Craigslist won't ignore a parameter it doesn't expect. Instead it treats the page as nonexistent, returns a 301 (Moved Permanently), and redirects you to the very page you are trying not to cache. I was therefore forced to do this at the browser level. I discovered that Gecko-based browsers such as Mozilla Firefox have a way to do exactly that, but I am pretty sure it only works with elevated (chrome) privileges. In other words, you can't use this in a generic web site JS script.
```js
var url = "http://raleigh.craigslist.org/ele/index.rss";
var xmlhttp = new XMLHttpRequest();
xmlhttp.open('GET', url, true);
// Mozilla only, Gecko more specifically. Requires chrome privileges,
// so this works in an extension but not in page-level scripts.
try {
    xmlhttp.channel.loadFlags |= Components.interfaces.nsIRequest.LOAD_BYPASS_CACHE;
} catch (e) {
    // channel or Components isn't available here; carry on without bypassing the cache.
}
xmlhttp.onreadystatechange = function() {
    callbackFunction();
};
xmlhttp.send(null);
```
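For completeness, here is a minimal sketch of another route for unprivileged page scripts, not something I used in CraigZilla, and it only helps where the browser and any caches along the way honor the headers: ask for a fresh copy explicitly by sending no-cache request headers instead of touching the URL.

```js
var url = "http://raleigh.craigslist.org/ele/index.rss";
var xmlhttp = new XMLHttpRequest();
xmlhttp.open('GET', url, true);
// Headers must be set after open(); these ask the browser and any
// intermediate caches not to serve a stored copy of the response.
xmlhttp.setRequestHeader('Cache-Control', 'no-cache');
xmlhttp.setRequestHeader('Pragma', 'no-cache');
xmlhttp.onreadystatechange = function() {
    callbackFunction();
};
xmlhttp.send(null);
```

Since the URL itself is untouched, this sidesteps the Craigslist redirect problem entirely, at the cost of depending on well-behaved caches.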