# robots.txt for https://nextinera.com/
# Next In Era Technologies

User-agent: *
# Rules in this group apply to all web crawlers.
# You can target a particular bot by naming it instead, e.g., User-agent: Googlebot

# Allow crawling of all content by default
Allow: /

# Disallow specific files or directories if needed.
# For a static site, you might not have much to disallow initially.
# Examples (uncomment and modify if needed):
# Disallow: /admin/          # If you had an admin directory
# Disallow: /private_files/
# Disallow: /cgi-bin/        # Common directory to disallow

# It's generally good practice to allow access to CSS, JS, and images,
# as Google uses them to render pages. Allowing / implicitly does this,
# but you could be explicit if you had a more restrictive Disallow above
# (see the precedence example at the end of this file).
# Allow: /css/
# Allow: /js/
# Allow: /images/

# Prevent the 404 error page from being crawled directly
# (use /error-404.html instead if that's the filename you used)
Disallow: /404.html

# Specify the location of the sitemap
Sitemap: https://nextinera.com/sitemap.xml

# You can add rules for specific bots if needed. For example:
# User-agent: BadBot
# Disallow: /
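
# A minimal sketch of Allow/Disallow precedence, assuming a hypothetical
# /admin/ section whose stylesheets are still needed for rendering pages.
# Under RFC 9309, the most specific (longest) matching rule wins, so the
# narrower Allow below would override the broader Disallow:
# User-agent: *
# Disallow: /admin/
# Allow: /admin/css/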
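
# Some crawlers (e.g., Bingbot) also honor the nonstandard Crawl-delay
# directive to throttle request rate; Googlebot ignores it. A hedged
# example, assuming you wanted to slow such a bot to one request every
# ten seconds:
# User-agent: Bingbot
# Crawl-delay: 10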