Fun with Google


Fun with Google, Part 1: Power searches and reconnaissance. Jeremy Rasmussen, 9/23/05

Agenda
- I'm feeling lucky
- The Google interface
- Preferences
- Cool stuff
- Power searching

Classic interface

Custom interface

Language prefs

Google in H4x0r

Language
- A proxy server can be used to hide location and identity while surfing the Web
- Google sets the default language to match the country where the proxy is
- If your language settings change inexplicably, check your proxy settings
- You can manipulate the language manually by fiddling directly with the URL, as in the example below
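For example (using the hl language parameter that shows up again in the URL-queries slide; these values are real Google language codes, given here for illustration):

http://www.google.com/search?q=foo&hl=en (English)
http://www.google.com/search?q=foo&hl=pl (Polish)
http://www.google.com/search?q=foo&hl=xx-hacker (the H4x0r interface above)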

Google scholar

Google University search

Google groups

Google freeware
- Web Accelerator
- Google Earth
- Picasa
- Etc.

Golden rules of searching
- Google is case-insensitive, except for the Boolean operator OR, which must be written in uppercase
- Wildcards are not handled normally: * is nothing more than a single word in a search phrase and provides no additional stemming
- Google stems automatically: it tries to expand or contract words on its own, which can lead to unpredictable results

Golden rules of searching (cont.)
- Google ignores stop words: who, where, what, the, a, an
- Except when you search on them individually, put quotes around the search phrase, or +force +it +to +use +all +terms
- Largest possible search? Google limits you to a 10-word query; get around this by using wildcards in place of stop words, since * does not count against the limit (see the example below)
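For example (an illustrative phrase, not from the slides), this 11-word phrase search trips the limit:

"please tell me all of the words in this exact phrase"

Replacing the stop words with wildcards drops the countable terms to eight; each * still has to match some word in that position:

"please tell me all * * words * this exact phrase"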

Boolean operators
- Google automatically ANDs all search terms
- Spice things up with: OR, |, NOT
- Google evaluates these from left to right
- Search terms don't even have to be syntactically correct in terms of Boolean logic

Search example: what does the following search term do?

intext:password | passcode intext:username | userid | user filetype:xls

It locates all pages that have either password or passcode in their text. Then, from these, it shows only pages that have username, userid, or user. From these, it shows only .XLS files. Google is not confused by the lousy syntax or lack of parentheses.

URL queries
- Everything that can be done through the search box can be done by manually entering a URL
- The only required parameter is q (query): www.google.com/search?q=foo
- String together parameters with &: www.google.com/search?q=foo&hl=en (specifies a query on foo and a language of English)
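Because the whole interface reduces to a URL, queries can also be scripted with any command-line HTTP client. A minimal sketch (lynx reappears in the scraping slide later; the output filename is arbitrary):

trIpl3-H> lynx -dump "http://www.google.com/search?q=foo&hl=en" > results.txt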

Some advanced operators
- intitle: search text within the title of a page. URL: as_occt=title
- inurl: search text within a given URL; allows you to search for specific directories or folders. URL: as_occt=url
- filetype: search for pages with a particular file extension. URL: as_ft=i&as_filetype=
- site: search only within the specified sites; must be a valid top-level domain name. URL: as_dt=i&as_sitesearch=
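For example, these two forms should return the same results (budget and usf.edu are an arbitrary illustration; usf.edu is the domain used in the crawling slides later):

Search box: budget filetype:xls site:usf.edu
URL: www.google.com/search?q=budget&as_ft=i&as_filetype=xls&as_dt=i&as_sitesearch=usf.edu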

Some advanced operators (cont.)
- link: search for pages that link to other pages; must be correct URL syntax. If invalid link syntax is provided, Google treats it like a phrase search. URL: as_lq
- daterange: search for pages published within a certain date range. Uses Julian dates, or 3 mo / 6 mo / yr in the URL form: as_qdr=m6 searches the past six months
- numrange: search for numbers within a range from low to high. E.g., numrange:99-101 will find 100. Alternatively, use 99..101. URL: as_nlo=&as_nhi=
- Note: Google ignores $ and , (makes searching easier)
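A worked daterange example (my date arithmetic, not from the slides): Julian day 2451545 is 1 Jan 2000, and the years 2000 through 2004 contain 1,827 days, so 1 Jan 2005 is Julian day 2453372. A query such as daterange:2453372-2453402 therefore covers roughly the first month of 2005.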

Advanced operators (cont.)
- cache: use Google's cached copy of the results page. Passing an invalid URL as a parameter to cache will submit the query as a phrase search. URL:
- info: shows summary information for a site and provides links to other Google searches that might pertain to the site. Same as supplying the URL as a search query.
- related: shows sites Google thinks are similar. URL: as_rq

Google Groups operators
- author: find a Usenet author
- group: find a Usenet group
- msgid: find a Usenet message ID
- insubject: find Usenet subject lines (similar to intitle:)
These are useful for finding people, NNTP servers, etc.

Hacking Google
- Try to explore how commands work together
- Try to find out why stuff works the way it does
- E.g., why does the following return > 0 hits? (filetype:pdf | filetype:xls) -inurl:pdf -inurl:xls

Surfing anonymously
- People who want to surf anonymously usually use a Web proxy
- Go to samair.ru/proxy and find a willing, open proxy, then change your browser configs
- E.g., proxy to 195.205.195.131:80 (Poland)
- Check it via: http://www.allnettools.com/toolbox,net
- This resets the Google search page to Polish
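The same proxy works from the command line, too; lynx and wget honor the conventional http_proxy environment variable (a sketch, not from the slides):

trIpl3-H> export http_proxy=http://195.205.195.131:80/
trIpl3-H> lynx -dump http://www.google.com/ | head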

Google searches for proxies
- inurl:"nph-proxy.cgi" "Start browsing through this CGI-based proxy"
  E.g., http://www.netshaq.com/cgiproxy/nph-proxy.cgi/011100A/
- "this proxy is working fine!" "enter *" "URL***" * visit
  E.g., http://web.archive.org/web/20050922222155/http://davegoorox.c-f-h.com/cgiproxy/nph-proxy.cgi/000100A/http/news.google.com/webhp?hl=en&tab=nw&ned=us&q=

Caching anonymously
- Caching is a good way to see Web content without leaving an entry in the target's log, right?
- Not necessarily: Google still tries to download images, which creates a connection from you to the server
- The cached text only version will let you see the page (sans images) anonymously
- Get there by copying the URL from the Google cache and appending &strip=1 to the end
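The resulting URL has roughly the form below (the host, cached page, and extra terms are placeholders for illustration); with &strip=1 appended, only text is fetched:

http://www.google.com/search?q=cache:www.example.com/+foo&hl=en&strip=1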

Using Google as a proxy
- Use Google as a proxy server via its translation service
- Translate English to English: http://www.google.com/translate?u=http%3A%2F%2Fwww.google.com&langpair=en%7Cen&hl=en&ie=Unknown&oe=ASCII
- D'oh! It's a transparent proxy: the Web server can still see your IP address. Oh well.

Finding Web server versions
- It might be useful to get info on server types and versions
- E.g., "Microsoft-IIS/6.0" intitle:index.of
- E.g., "Apache/2.0.52 server at" intitle:index.of
- E.g., intitle:Test.Page.for.Apache it.worked! returns a list of sites running Apache 1.2.6 with a default home page

Traversing directories
- Look for Index directories: intitle:index.of inurl:/admin/*
- Or, try incremental substitution of URLs (a.k.a. fuzzing): /docs/bulletin/1.xls could be modified to /docs/bulletin/2.xls, even if Google didn't return that file in its search (see the sketch below)
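A quick shell sketch of that substitution (the host and range are made up for illustration; curl is assumed):

trIpl3-H> for i in $(seq 1 20); do
>   # Probe each candidate file; report the ones that exist (HTTP 200)
>   code=$(curl -s -o /dev/null -w '%{http_code}' "http://www.example.com/docs/bulletin/$i.xls")
>   [ "$code" = "200" ] && echo "found: /docs/bulletin/$i.xls"
> done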

Finding PHP source
- A PHP script executes on the server and presents HTML to your browser; you can't do a View Source and see the script
- However, Web servers aren't too sure what to do with a foo.php.bak file: they treat it as text
- Search for backup copies of Web files: inurl:backup intitle:index.of inurl:admin php

Recon: finding stuff about people
- Intranets:
  inurl:intranet intitle:"human resources"
  inurl:intranet intitle:"employee login"
- Help desks:
  inurl:intranet help.desk | helpdesk
- Email on the Web:
  filetype:mbx intext:Subject
  filetype:pst inurl:pst (inbox | contacts)

Recon: finding stuff about people (cont.)
- Windows registry files on the Web! filetype:reg reg +intext:"internet account manager"
- A million other ways:
  filetype:xls inurl:email.xls
  inurl:email filetype:mdb
  (filetype:mail | filetype:eml | filetype:pst | filetype:mbx) intext:password|subject

Recon: finding stuff about people (cont.)
- Full emails: filetype:eml eml +intext:"Subject" +intext:"From" 2005
- Buddy lists: filetype:blt buddylist
- Résumés: "phone * * *" "address *" "e-mail" intitle:"curriculum vitae"
- Including SSNs? Yes


Site crawling
- All domain names, different ways:
  site:www.usf.edu returns 10 thousand pages
  site:usf.edu returns 2.8 million pages
  site:usf.edu -site:www.usf.edu returns 2.9 million pages
  site:www.usf.edu -site:usf.edu returns nada

Scraping domain names with a shell script

trIpl3-H> # Pull the first 100 results for site:usf.edu (minus www) as plain text
trIpl3-H> lynx -dump \
  "http://www.google.com/search?q=site:usf.edu+-www.usf.edu&num=100" > sites.txt
trIpl3-H> # Keep only the numbered reference lines pointing at *.usf.edu hosts
trIpl3-H> sed -n 's/\. http:\/\/[[:alpha:]]*\.usf\.edu\//& /p' sites.txt >> sites.out

Sample output: www.cas.usf.edu, anchin.coedu.usf.edu, library.arts.usf.edu, www.coba.usf.edu, listserv.admin.usf.edu, catalog.grad.usf.edu, www.coedu.usf.edu, mailman.acomp.usf.edu, ce.eng.usf.edu, www.ctr.usf.edu, modis.marine.usf.edu, cedr.coba.usf.edu, www.eng.usf.edu, my.usf.edu, chuma.cas.usf.edu, www.flsummit.usf.edu, nbrti.cutr.usf.edu, comps.marine.usf.edu, www.fmhi.usf.edu, nosferatu.cas.usf.edu, etc.usf.edu, www.marine.usf.edu, planet.blog.usf.edu, facts004.facts.usf.edu, www.moffitt.usf.edu, publichealth.usf.edu, fcit.coedu.usf.edu, rarediseasesnetwork.epi.usf.edu, www.nelson.usf.edu, www.plantatlas.usf.edu, tapestry.usf.edu, fcit.usf.edu, www.registrar.usf.edu, ftp://modis.marine.usf.edu, usfweb.usf.edu, www.research.usf.edu, usfweb2.usf.edu, hsc.usf.edu, www.reserv.usf.edu, https://hsccf.hsc.usf.edu, w3.usf.edu, www.safetyflorida.usf.edu, web.lib.usf.edu, https://security.usf.edu, www.sarasota.usf.edu, web.usf.edu, isis.fastmail.usf.edu, www.stpt.usf.edu, web1.cas.usf.edu, www.acomp.usf.edu, www.career.usf.edu, www.ugs.usf.edu, www.usfpd.usf.edu, www.wusf.usf.edu

Using the Google API
- Check out http://www.google.com/apis
- Google allows up to 1,000 API queries per day
- Cool Perl script for scraping domain names at www.sensepost.com: dns-mine.pl
- By using combos of site, web, link, about, etc., it can find a lot more than the previous example

- Perl scripts for the Bi-Directional Link Extractor (BiLE) and BiLE Weight are also available
- BiLE grabs links to sites using Google's link query
- BiLE Weight calculates the relevance of links

Remote anonymous scanning with NQT
- Google query: filetype:php inurl:nqt intext:"Network Query Tool"
- Network Query Tool allows: resolve/reverse lookup, get DNS records, whois, check port, ping host, traceroute
- The NQT form also accepts input from XSS, but it is still unpatched at this point!
- Using a proxy, perform an anonymous scan via the Web (see the sketch below)
- Even worse, an attacker can scan the internal hosts of networks hosting NQT
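A sketch of how that abuse might look from the command line (the form field names below are guesses for illustration only; a real install's field names would have to be read from its HTML source):

trIpl3-H> # Drive the NQT form through the open proxy from the earlier slide
trIpl3-H> curl -s http://www.example.com/nqt.php --proxy 195.205.195.131:80 \
>   --data "target=192.168.1.1" --data "queryType=portscan"

Pointing the target at an RFC 1918 address is what lets an attacker probe hosts internal to the network hosting NQT.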

Other port scanning
- Find a PHP port scanner: inurl:port