Comments (10)
Full Marks Pvt Ltd
Publication House
User-agent: * -- this line addresses all search engine crawlers;
User-agent: Googlebot -- this line addresses only Google's crawler.
To allow only Googlebot and block every other crawler, you can write it like this:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
Natalie Gracia
Writer
Use the Disallow directive. It tells a user-agent not to crawl a particular URL. Only one "Disallow:" line is allowed for each URL.
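As an illustration of the one-Disallow-per-URL rule (the paths below are placeholders, not from this thread), blocking two separate paths for all crawlers takes one Disallow line each:

```
User-agent: *
Disallow: /private-page.html
Disallow: /drafts/
```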
Roose Aana
Australia assignment help
If you want Google to crawl your URL while other search engines do not, allow Googlebot and disallow the rest, like this:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
Rahul Singh
I Like Writing
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
Coupon Sale
Latest Coupon Codes and Deals
Yes, you can do this by allowing only Googlebot and disallowing the others:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
Sonera Jhaveri
Psychotherapist in Mumbai
Thank you, David Young, for this knowledge.
Nityanand Tripathi
Senior Digital Marketing Executive
Thank you, David Young, for this knowledge.
David Young
Sacer Shop
User-agent: * -- this line addresses all search engine crawlers;
User-agent: Googlebot -- this line addresses only Google's crawler.
To allow only Googlebot and block every other crawler, you can write it like this:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
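Before deploying rules like these, it is worth checking how a well-behaved crawler will interpret them. A minimal sketch using Python's standard-library urllib.robotparser, with an allow-only-Googlebot ruleset (the URL path is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Rules that allow only Googlebot: an empty "Disallow:" permits
# everything for Googlebot, while "Disallow: /" blocks all other bots.
rules = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "/page.html"))  # True
print(parser.can_fetch("Bingbot", "/page.html"))    # False
```

Any crawler not named in its own group falls back to the `User-agent: *` group, which is why the single `Disallow: /` there blocks everything else.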
TNPSC News
TNPSC Portal
A robots.txt file is a file at the root of your site that indicates those parts of your site you don't want accessed by search engine crawlers. ... robots.txt should only be used to control crawling traffic ... At times, you might want to consider other mechanisms to ensure your URLs are not findable on the web.
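One such mechanism is the noindex rule: robots.txt only controls crawling, so a page you want kept out of search results should instead carry a noindex directive in its HTML head (and must not be blocked in robots.txt, or the crawler will never see the tag):

```
<meta name="robots" content="noindex">
```

The same directive can be sent for non-HTML resources via the `X-Robots-Tag: noindex` HTTP response header.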