Adds support for parsing and respecting robots.txt during website collection.
This change introduces the following features:
- Fetches and parses `/robots.txt` before crawling a website.
- Respects `Disallow` patterns to avoid crawling restricted areas.
- Honors the `Crawl-delay` directive to prevent hammering sites.
- Adds command-line flags to configure the behavior:
  - `--ignore-robots`: Ignores robots.txt rules.
  - `--user-agent`: Sets a custom user-agent string.
  - `--min-delay`: Enforces a minimum delay, overriding a shorter crawl-delay.
The implementation includes a new `robots` package for parsing robots.txt files and integrates it into the existing website downloader. Tests have been added to verify the new functionality.
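For orientation, here is a minimal sketch of the kind of parsing the new `robots` package performs. This is an illustrative stand-in built on the Go standard library, not the committed code; the `rules`, `parseRobots`, and `allowed` names are hypothetical, and user-agent group handling is deliberately simplified.

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// rules holds the directives that apply to one user-agent.
type rules struct {
	disallow   []string
	crawlDelay float64 // seconds; 0 means no Crawl-delay directive seen
}

// parseRobots extracts Disallow and Crawl-delay directives that apply to
// the given agent (or to "*") from a robots.txt body. Simplified sketch:
// exact agent matching only, and consecutive User-agent lines are not
// merged into one group as the de facto spec allows.
func parseRobots(body, agent string) rules {
	var r rules
	applies := false
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if i := strings.Index(line, "#"); i >= 0 {
			line = strings.TrimSpace(line[:i]) // strip comments
		}
		key, val, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		key = strings.ToLower(strings.TrimSpace(key))
		val = strings.TrimSpace(val)
		switch key {
		case "user-agent":
			applies = val == "*" || strings.EqualFold(val, agent)
		case "disallow":
			if applies && val != "" {
				r.disallow = append(r.disallow, val)
			}
		case "crawl-delay":
			if applies {
				if d, err := strconv.ParseFloat(val, 64); err == nil {
					r.crawlDelay = d
				}
			}
		}
	}
	return r
}

// allowed reports whether path matches none of the Disallow prefixes.
func (r rules) allowed(path string) bool {
	for _, p := range r.disallow {
		if strings.HasPrefix(path, p) {
			return false
		}
	}
	return true
}

func main() {
	r := parseRobots("User-agent: *\nDisallow: /private/\nCrawl-delay: 2\n", "borg")
	fmt.Println(r.allowed("/private/data"), r.crawlDelay) // false 2
}
```

Presumably the downloader checks each URL against the parsed rules before fetching and sleeps for the larger of the Crawl-delay and `--min-delay` between requests, skipping both when `--ignore-robots` is set.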
Co-authored-by: Snider <631881+Snider@users.noreply.github.com>
- Implements all placeholder Go examples in the `examples` directory.
- Corrects the `run_matrix_programmatically` example to use the `borg` package.
- Refactors the code to centralize the matrix execution logic in the `matrix` package (a sketch of programmatic usage follows this list).
- Updates the documentation to include a new "Programmatic Usage" section that describes all of the Go examples.
- Updates the "Terminal Isolation Matrix" section to remove manual 'runc' instructions, emphasizing that 'borg run' handles this process to maintain security and isolation.
- Adds missing examples for the `collect github repos`, `collect github release`, and `compile` commands to the documentation.
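For readers of the new "Programmatic Usage" docs, a minimal sketch of what calling the centralized matrix logic from Go might look like. The import path, `matrix.Run`, and `matrix.Options` are assumptions for illustration, not the project's actual API.

```go
package main

import (
	"fmt"
	"log"

	// Hypothetical import path; substitute the project's real module path.
	"github.com/Snider/borg/matrix"
)

func main() {
	// Hypothetical entry point: execute a command inside the isolation
	// matrix, the same code path `borg run` uses under the hood.
	out, err := matrix.Run(matrix.Options{
		Command: []string{"echo", "hello from the matrix"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```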
This commit adds placeholder Go programs to the `examples` directory for
all of Borg's features. The placeholders give a clear roadmap for future
implementation and ensure there is a testing strategy for each function;
a sketch of their common shape follows the list below.
The new placeholder examples are:
- `examples/all`
- `examples/collect_github_release`
- `examples/collect_github_repo`
- `examples/collect_github_repos`
- `examples/collect_pwa`
- `examples/collect_website`
- `examples/serve`
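Each placeholder follows the same minimal shape, so a real demonstration can be dropped in later without restructuring. An illustrative (not verbatim) example:

```go
// examples/collect_website/main.go -- illustrative placeholder shape
package main

import "fmt"

func main() {
	// TODO: demonstrate collecting a website through the borg API once
	// the programmatic entry point is implemented.
	fmt.Println("collect_website example: not yet implemented")
}
```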