Comments (10)
Full Marks Pvt Ltd
3
Publication House
User-agent: * -- this targets all search engines;
User-agent: Googlebot -- this targets only Google's crawler.
To allow only Googlebot and block all other crawlers, you can write it like this:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
Natalie Gracia
2
Writer
Use the Disallow directive. This command tells a user-agent not to crawl a particular URL. Only one "Disallow:" line is allowed for each URL.
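For example, a minimal robots.txt that blocks a single page for every crawler (the path /private/page.html is a hypothetical example, not from this thread):

```
User-agent: *
Disallow: /private/page.html
```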
Roose Aana
6
Australia assignment help
If you want Google to crawl a URL while other search engines do not, allow Googlebot and disallow everyone else, like this:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
Rahul Singh
3
I Like Writting
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
Coupon Sale
2
Latest Coupon Codes and Deals
Yes, you can do this by allowing only Googlebot and disallowing the others:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
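A quick way to sanity-check rules like these is Python's standard-library robots.txt parser. This is a sketch: the rules and the example URLs below are illustrative assumptions, not taken from a real site.

```python
# Verify an "allow only Googlebot" robots.txt with Python's stdlib parser.
from urllib.robotparser import RobotFileParser

# Hypothetical rules: Googlebot may crawl everything; all other crawlers
# are blocked from the entire site.
rules = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot may fetch the page; any other crawler (e.g. Bingbot) may not.
print(parser.can_fetch("Googlebot", "https://example.com/page"))
print(parser.can_fetch("Bingbot", "https://example.com/page"))
```

An empty `Disallow:` line means "nothing is disallowed" for that user-agent group, which is why Googlebot passes while everyone else falls through to the blanket `Disallow: /`.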
Sonera Jhaveri
7
Psychotherapist in Mumbai
Thank you David Young for this knowledge.
Nityanand Tripathi
13
Senior Digital Marketing Executive
Thank you David Young for this knowledge.
David Young
2
Sacer Shop
User-agent: * -- this targets all search engines;
User-agent: Googlebot -- this targets only Google's crawler.
To allow only Googlebot and block all other crawlers, you can write it like this:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
TNPSC News
3
TNPSC Portal
A robots.txt file is a file at the root of your site that indicates which parts of your site you don't want accessed by search engine crawlers. ... robots.txt should only be used to control crawling traffic ... At times, you might want to consider other mechanisms to ensure your URLs are not findable on the web.