Get Twitter Card To Work.

What do I add to my robots.txt file to allow my Twitter Card to render? The page in question is www.gopherstateevents.com.

Thanks!
— Bob Schneider, Co-Owner

Kimputer commented:
That's a backwards way of thinking about it. You only need to look at your robots.txt file if Twitter Cards AREN'T working.
Also, if Cards aren't working and you're looking at your robots.txt file, you're not there to ADD anything; you're there to REMOVE the lines that are blocking Twitter.

I just put your URL through the Card validator. So in your case, pull up your web access log, run a few more validations, and you'll be able to recognize the crawler's request for robots.txt; then remove the rules in robots.txt that are blocking it.
Lucas Bishop (Click Tracker) commented:
You've disallowed all bots, except for a few (googlebot, slurp, etc.), from crawling your site:
http://www.gopherstateevents.com/robots.txt
# ----------
# -- bingbot, microsoft indexer
User-agent: Bingbot
Disallow: /images
Disallow: /cgi
Crawl-delay: 10
# ----------
# -- msnbot, microsoft indexer
User-agent: msnbot
Disallow: /images
Disallow: /cgi
Crawl-delay: 10
# ----------
# -- Googlebot
User-agent: googlebot
Disallow: /images
Disallow: /cgi
User-agent: googlebot-image
Disallow: /
# ----------
# -- Slurp, Yahoo indexer
User-agent: slurp
Disallow: /images
Disallow: /cgi
Crawl-delay: 10
# ----------
# -- Default for all others
User-agent: *
Disallow: /



You need to allow Twitterbot to crawl:
https://developer.twitter.com/en/docs/tweets/optimize-with-cards/guides/getting-started

Twitter’s crawler respects Google’s robots.txt specification when scanning URLs. If a page with card markup is blocked, no card will be shown. If an image URL is blocked, no thumbnail or photo will be shown.

Twitter uses the User-Agent of Twitterbot (with version, such as Twitterbot/1.0), which can be used to create an exception in the robots.txt file.

User-agent: Twitterbot
Disallow:
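
For illustration, here is one way that exception might be merged into your existing file (a sketch only, not the only valid layout). Because Twitterbot gets its own group, the catch-all "User-agent: *" / "Disallow: /" block at the bottom no longer applies to it, and the empty Disallow lets Twitterbot fetch everything — which matters because a blocked image URL would also prevent the card thumbnail from rendering:

# ----------
# -- Twitterbot, Twitter Card crawler
User-agent: Twitterbot
Disallow:
# ----------
# -- Default for all others
User-agent: *
Disallow: /

After updating the file, run the page through the Card validator again to confirm that Twitterbot can now fetch both the page and its image.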



Bob Schneider (Co-Owner, Author) commented:
Thanks Lucas!