Footprinting is the act of gathering information about a computer system and the company it belongs to. It is the first step in the hacking process, because to hack a system the hacker must first learn everything there is to know about it. Below I will give you examples of the steps and services a hacker would use to get information from a website.
- First, a hacker would start gathering information on the target's website. Things a hacker would look for are e-mails and names. This information could come in handy if the hacker was planning to attempt a social engineering attack against the company (see the e-mail harvesting sketch after this list).
- Next, the hacker would get the IP address of the website. Going to http://www.selfseo.com/find_ip_address_of_a_website.php and inserting the website URL will spit out its IP address (the DNS sketch after this list does the same lookup locally).
- Next, the hacker would ping the server to see if it is up and running. There's no point in trying to hack an offline server. http://just-ping.com pings a website from 34 different locations around the world. Insert the website name or IP address and hit "Ping". If all the packets go through, the server is up (see the reachability sketch after this list).
- Next, the hacker would do a Whois lookup on the company website. Go to http://whois.domaintools.com and put in the target website. As you can see, this gives a HUGE amount of information about the company: the company e-mails, address, names, when the domain was created, when the domain expires, the domain name servers, and more (see the whois sketch after this list)!
- A hacker can also take advantage of search engines to dig through a site's data. For example, a hacker could search a website through Google with "site:WWW.the-target-site.com"; this displays every page that Google has indexed for the website. You can narrow down the results by adding a specific word after it. For example, the hacker could search "site:WWW.the-target-site.com email", which could list several e-mails that are published on the website. Another Google search is "inurl:robots.txt", which looks for pages named robots.txt. If a site has a robots.txt file, it lists the directories and pages on the website that the owners wish to keep hidden from search engine spiders. Occasionally you might come across some valuable information in this file that was meant to be kept private (you can also fetch the file directly; see the robots.txt sketch after this list).
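To make the first step concrete, here is a minimal Python sketch, not from the original post, that downloads a single page and pulls out anything that looks like an e-mail address. The URL is a placeholder, and a real harvester would crawl many pages rather than one.

```python
import re
import urllib.request

# Placeholder target; substitute the site being footprinted.
url = "http://WWW.the-target-site.com"

# Fetch the raw HTML of one page.
with urllib.request.urlopen(url) as response:
    html = response.read().decode("utf-8", errors="ignore")

# A simple (deliberately loose, not RFC-complete) e-mail pattern.
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))

for address in sorted(emails):
    print(address)
```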
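The IP lookup in the second step does not actually need a third-party site; a plain DNS query from Python's standard library returns the same answer. A sketch, with the hostname as a placeholder:

```python
import socket

hostname = "WWW.the-target-site.com"  # placeholder target

# Resolve the hostname to an IPv4 address, just like the lookup site does.
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```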
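For the ping step, keep in mind that many servers silently drop ICMP, so a failed ping does not always mean the server is down. One alternative, sketched below with a placeholder host, is to try opening a TCP connection to the web port instead:

```python
import socket

host = "WWW.the-target-site.com"  # placeholder target

# Firewalls often filter ICMP pings, so try the web port directly.
# If the connection succeeds, the web server is up and reachable.
try:
    with socket.create_connection((host, 80), timeout=5):
        print(f"{host} is up (port 80 accepted the connection)")
except OSError:
    print(f"{host} appears to be down or filtered")
```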
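The whois step can also be run locally. Assuming a Unix-like machine with the standard `whois` client installed, a short sketch:

```python
import subprocess

domain = "the-target-site.com"  # placeholder target

# Shell out to the system whois client and capture the registration
# record (registrant e-mails, name servers, creation/expiry dates).
result = subprocess.run(["whois", domain], capture_output=True, text=True)
print(result.stdout)
```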
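Finally, since robots.txt always lives at the web root, you can skip Google and request the file directly. A sketch with a placeholder host that prints only the Disallow lines, i.e. the paths the site asked crawlers to stay out of:

```python
import urllib.request
from urllib.error import HTTPError, URLError

url = "http://WWW.the-target-site.com/robots.txt"  # placeholder target

# Fetch robots.txt and show the paths the owners want kept out of
# search engines; these are often the most interesting directories.
try:
    with urllib.request.urlopen(url, timeout=5) as response:
        text = response.read().decode("utf-8", errors="ignore")
    for line in text.splitlines():
        if line.lower().startswith("disallow"):
            print(line)
except (HTTPError, URLError):
    print("No readable robots.txt at", url)
```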