refactor: remove BugSETI and i18n tools (moved to own repos)

Remove internal/bugseti/ (now the core/bugseti repo), cmd/bugseti/ (now core/bugseti/cmd/), and internal/tools/ (i18n-validate moved to core/go). The core/cli internal/ directory is now empty.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
parent 47e11e7861
commit 5f298275a7

70 changed files with 0 additions and 27393 deletions

cmd/bugseti/.gitignore (vendored, 31 deletions)
@@ -1,31 +0,0 @@
# Build output
bin/
frontend/dist/
frontend/node_modules/
frontend/.angular/

# IDE
.idea/
.vscode/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Go
*.exe
*.exe~
*.dll
*.so
*.dylib

# Test
*.test
*.out
coverage/

# Wails
wails.json
@@ -1,186 +0,0 @@
# BugSETI

**Distributed Bug Fixing - like SETI@home but for code**

BugSETI is a system tray application that helps developers contribute to open source by fixing bugs in their spare CPU cycles. It fetches issues from GitHub repositories, prepares context using AI, and guides you through the fix-and-submit workflow.

## Features

- **System Tray Integration**: Runs quietly in the background, ready when you are
- **Issue Queue**: Automatically fetches and queues issues from configured repositories
- **AI Context Seeding**: Prepares relevant code context for each issue using pattern matching
- **Workbench UI**: Full-featured interface for reviewing issues and submitting fixes
- **Automated PR Submission**: Streamlined workflow from fix to pull request
- **Stats & Leaderboard**: Track your contributions and compete with the community

## Installation

### From Source

```bash
# Clone the repository
git clone https://forge.lthn.ai/core/go.git
cd core

# Build BugSETI
task bugseti:build

# The binary will be in build/bin/bugseti
```

### Prerequisites

- Go 1.25 or later
- Node.js 18+ and npm (for frontend)
- GitHub CLI (`gh`) authenticated
- Chrome/Chromium (optional, for webview features)

## Configuration

On first launch, BugSETI will show an onboarding wizard to configure:

1. **GitHub Token**: For fetching issues and submitting PRs
2. **Repositories**: Which repos to fetch issues from
3. **Filters**: Issue labels, difficulty levels, languages
4. **Notifications**: How to alert you about new issues

### Configuration File

Settings are stored in `~/.config/bugseti/config.json`:

```json
{
  "github_token": "ghp_...",
  "repositories": [
    "host-uk/core",
    "example/repo"
  ],
  "filters": {
    "labels": ["good first issue", "help wanted", "bug"],
    "languages": ["go", "typescript"],
    "max_age_days": 30
  },
  "notifications": {
    "enabled": true,
    "sound": true
  },
  "fetch_interval_minutes": 30
}
```
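
For illustration, here is a minimal sketch of how a Go program might read this file. The `Config` struct, its field names, and the `loadConfig` helper are hypothetical; they only mirror the JSON keys shown above and are not taken from the removed `config.go`.

```go
// Hypothetical sketch: load ~/.config/bugseti/config.json into a struct.
// Field names mirror the JSON keys shown above; the real config.go may differ.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type Config struct {
	GitHubToken  string   `json:"github_token"`
	Repositories []string `json:"repositories"`
	Filters      struct {
		Labels     []string `json:"labels"`
		Languages  []string `json:"languages"`
		MaxAgeDays int      `json:"max_age_days"`
	} `json:"filters"`
	Notifications struct {
		Enabled bool `json:"enabled"`
		Sound   bool `json:"sound"`
	} `json:"notifications"`
	FetchIntervalMinutes int `json:"fetch_interval_minutes"`
}

func loadConfig() (*Config, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}
	data, err := os.ReadFile(filepath.Join(home, ".config", "bugseti", "config.json"))
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, fmt.Errorf("parse config: %w", err)
	}
	return &cfg, nil
}

func main() {
	cfg, err := loadConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("watching %d repositories\n", len(cfg.Repositories))
}
```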

## Usage

### Starting BugSETI

```bash
# Run the application
./bugseti

# Or use task runner
task bugseti:run
```

The app will appear in your system tray. Click the icon to see the quick menu or open the workbench.

### Workflow

1. **Browse Issues**: Click the tray icon to see available issues
2. **Select an Issue**: Choose one to work on from the queue
3. **Review Context**: BugSETI shows relevant files and patterns
4. **Fix the Bug**: Make your changes in your preferred editor
5. **Submit PR**: Use the workbench to create and submit your pull request

### Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| `Ctrl+Shift+B` | Open workbench |
| `Ctrl+Shift+N` | Next issue |
| `Ctrl+Shift+S` | Submit PR |

## Architecture

```
cmd/bugseti/
  main.go          # Application entry point
  tray.go          # System tray service
  icons/           # Tray icons (light/dark/template)
  frontend/        # Angular frontend
    src/
      app/
        tray/        # Tray panel component
        workbench/   # Main workbench
        settings/    # Settings panel
        onboarding/  # First-run wizard

internal/bugseti/
  config.go        # Configuration service
  fetcher.go       # GitHub issue fetcher
  queue.go         # Issue queue management
  seeder.go        # Context seeding via AI
  submit.go        # PR submission
  notify.go        # Notification service
  stats.go         # Statistics tracking
```
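
As a purely illustrative sketch of the pipeline that the `fetcher.go`/`queue.go` split suggests, the types below show one possible shape. None of these names come from the removed source; they are assumptions made only for the example.

```go
// Illustrative only: a guess at the shape of the fetch/queue pipeline.
// The real fetcher.go and queue.go were removed in this commit and may differ.
package bugseti

import "sync"

// Issue is a minimal view of a GitHub issue the queue might hold.
type Issue struct {
	Repo   string // "owner/name"
	Number int
	Title  string
	Labels []string
}

// Fetcher retrieves candidate issues for a repository.
type Fetcher interface {
	Fetch(repo string) ([]Issue, error)
}

// Queue is a simple FIFO of issues waiting to be worked on.
type Queue struct {
	mu    sync.Mutex
	items []Issue
}

func (q *Queue) Push(issues ...Issue) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, issues...)
}

func (q *Queue) Pop() (Issue, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return Issue{}, false
	}
	next := q.items[0]
	q.items = q.items[1:]
	return next, true
}
```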

## Contributing

We welcome contributions! Here's how to get involved:

### Development Setup

```bash
# Install dependencies
cd cmd/bugseti/frontend
npm install

# Run in development mode
task bugseti:dev
```

### Running Tests

```bash
# Go tests
go test ./cmd/bugseti/... ./internal/bugseti/...

# Frontend tests
cd cmd/bugseti/frontend
npm test
```

### Submitting Changes

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/my-feature`
3. Make your changes and add tests
4. Run the test suite: `task test`
5. Submit a pull request

### Code Style

- Go: Follow standard Go conventions, run `go fmt`
- TypeScript/Angular: Follow Angular style guide
- Commits: Use conventional commit messages

## Roadmap

- [ ] Auto-update mechanism
- [ ] Team/organization support
- [ ] Integration with more issue trackers (GitLab, Jira)
- [ ] AI-assisted code review
- [ ] Mobile companion app

## License

MIT License - see [LICENSE](../../LICENSE) for details.

## Acknowledgments

- Inspired by SETI@home and distributed computing projects
- Built with [Wails v3](https://wails.io/) for native desktop integration
- Uses [Angular](https://angular.io/) for the frontend

---

**Happy Bug Hunting!**
@@ -1,134 +0,0 @@
version: '3'

includes:
  common: ./build/Taskfile.yml
  windows: ./build/windows/Taskfile.yml
  darwin: ./build/darwin/Taskfile.yml
  linux: ./build/linux/Taskfile.yml

vars:
  APP_NAME: "bugseti"
  BIN_DIR: "bin"
  VITE_PORT: '{{.WAILS_VITE_PORT | default 9246}}'

tasks:
  build:
    summary: Builds the application
    cmds:
      - task: "{{OS}}:build"

  package:
    summary: Packages a production build of the application
    cmds:
      - task: "{{OS}}:package"

  run:
    summary: Runs the application
    cmds:
      - task: "{{OS}}:run"

  dev:
    summary: Runs the application in development mode
    cmds:
      - wails3 dev -config ./build/config.yml -port {{.VITE_PORT}}

  build:all:
    summary: Builds for all platforms
    cmds:
      - task: darwin:build
        vars:
          PRODUCTION: "true"
      - task: linux:build
        vars:
          PRODUCTION: "true"
      - task: windows:build
        vars:
          PRODUCTION: "true"

  package:all:
    summary: Packages for all platforms
    cmds:
      - task: darwin:package
      - task: linux:package
      - task: windows:package

  clean:
    summary: Cleans build artifacts
    cmds:
      - rm -rf bin/
      - rm -rf frontend/dist/
      - rm -rf frontend/node_modules/

  # Release targets
  release:stable:
    summary: Creates a stable release tag
    desc: |
      Creates a stable release tag (bugseti-vX.Y.Z).
      Usage: task release:stable VERSION=1.0.0
    preconditions:
      - sh: '[ -n "{{.VERSION}}" ]'
        msg: "VERSION is required. Usage: task release:stable VERSION=1.0.0"
    cmds:
      - git tag -a "bugseti-v{{.VERSION}}" -m "BugSETI v{{.VERSION}} stable release"
      - echo "Created tag bugseti-v{{.VERSION}}"
      - echo "To push: git push origin bugseti-v{{.VERSION}}"

  release:beta:
    summary: Creates a beta release tag
    desc: |
      Creates a beta release tag (bugseti-vX.Y.Z-beta.N).
      Usage: task release:beta VERSION=1.0.0 BETA=1
    preconditions:
      - sh: '[ -n "{{.VERSION}}" ]'
        msg: "VERSION is required. Usage: task release:beta VERSION=1.0.0 BETA=1"
      - sh: '[ -n "{{.BETA}}" ]'
        msg: "BETA number is required. Usage: task release:beta VERSION=1.0.0 BETA=1"
    cmds:
      - git tag -a "bugseti-v{{.VERSION}}-beta.{{.BETA}}" -m "BugSETI v{{.VERSION}} beta {{.BETA}}"
      - echo "Created tag bugseti-v{{.VERSION}}-beta.{{.BETA}}"
      - echo "To push: git push origin bugseti-v{{.VERSION}}-beta.{{.BETA}}"

  release:nightly:
    summary: Creates a nightly release tag
    desc: Creates a nightly release tag (bugseti-nightly-YYYYMMDD)
    vars:
      DATE:
        sh: date -u +%Y%m%d
    cmds:
      - git tag -a "bugseti-nightly-{{.DATE}}" -m "BugSETI nightly build {{.DATE}}"
      - echo "Created tag bugseti-nightly-{{.DATE}}"
      - echo "To push: git push origin bugseti-nightly-{{.DATE}}"

  release:push:
    summary: Pushes the latest release tag
    desc: |
      Pushes the most recent bugseti-* tag to origin.
      Usage: task release:push
    vars:
      TAG:
        sh: git tag -l 'bugseti-*' | sort -V | tail -1
    preconditions:
      - sh: '[ -n "{{.TAG}}" ]'
        msg: "No bugseti-* tags found"
    cmds:
      - echo "Pushing tag {{.TAG}}..."
      - git push origin {{.TAG}}
      - echo "Tag {{.TAG}} pushed. GitHub Actions will build and release."

  release:list:
    summary: Lists all BugSETI release tags
    cmds:
      - echo "=== BugSETI Release Tags ==="
      - git tag -l 'bugseti-*' | sort -V

  version:
    summary: Shows current version info
    cmds:
      - |
        echo "=== BugSETI Version Info ==="
        echo "Latest stable tag:"
        git tag -l 'bugseti-v*' | grep -v beta | sort -V | tail -1 || echo " (none)"
        echo "Latest beta tag:"
        git tag -l 'bugseti-v*-beta.*' | sort -V | tail -1 || echo " (none)"
        echo "Latest nightly tag:"
        git tag -l 'bugseti-nightly-*' | sort -V | tail -1 || echo " (none)"
@ -1,90 +0,0 @@
|
|||
version: '3'
|
||||
|
||||
tasks:
|
||||
go:mod:tidy:
|
||||
summary: Runs `go mod tidy`
|
||||
internal: true
|
||||
cmds:
|
||||
- go mod tidy
|
||||
|
||||
install:frontend:deps:
|
||||
summary: Install frontend dependencies
|
||||
dir: frontend
|
||||
sources:
|
||||
- package.json
|
||||
- package-lock.json
|
||||
generates:
|
||||
- node_modules/*
|
||||
preconditions:
|
||||
- sh: npm version
|
||||
msg: "Looks like npm isn't installed. Npm is part of the Node installer: https://nodejs.org/en/download/"
|
||||
cmds:
|
||||
- npm install
|
||||
|
||||
build:frontend:
|
||||
label: build:frontend (PRODUCTION={{.PRODUCTION}})
|
||||
summary: Build the frontend project
|
||||
dir: frontend
|
||||
sources:
|
||||
- "**/*"
|
||||
generates:
|
||||
- dist/**/*
|
||||
deps:
|
||||
- task: install:frontend:deps
|
||||
- task: generate:bindings
|
||||
vars:
|
||||
BUILD_FLAGS:
|
||||
ref: .BUILD_FLAGS
|
||||
cmds:
|
||||
- npm run {{.BUILD_COMMAND}} -q
|
||||
env:
|
||||
PRODUCTION: '{{.PRODUCTION | default "false"}}'
|
||||
vars:
|
||||
BUILD_COMMAND: '{{if eq .PRODUCTION "true"}}build{{else}}build:dev{{end}}'
|
||||
|
||||
generate:bindings:
|
||||
label: generate:bindings (BUILD_FLAGS={{.BUILD_FLAGS}})
|
||||
summary: Generates bindings for the frontend
|
||||
deps:
|
||||
- task: go:mod:tidy
|
||||
sources:
|
||||
- "**/*.[jt]s"
|
||||
- exclude: frontend/**/*
|
||||
- frontend/bindings/**/*
|
||||
- "**/*.go"
|
||||
- go.mod
|
||||
- go.sum
|
||||
generates:
|
||||
- frontend/bindings/**/*
|
||||
cmds:
|
||||
- wails3 generate bindings -f '{{.BUILD_FLAGS}}' -clean=false -ts -i
|
||||
|
||||
generate:icons:
|
||||
summary: Generates Windows `.ico` and Mac `.icns` files from an image
|
||||
dir: build
|
||||
sources:
|
||||
- "appicon.png"
|
||||
generates:
|
||||
- "darwin/icons.icns"
|
||||
- "windows/icon.ico"
|
||||
cmds:
|
||||
- wails3 generate icons -input appicon.png -macfilename darwin/icons.icns -windowsfilename windows/icon.ico
|
||||
|
||||
dev:frontend:
|
||||
summary: Runs the frontend in development mode
|
||||
dir: frontend
|
||||
deps:
|
||||
- task: install:frontend:deps
|
||||
cmds:
|
||||
- npm run dev -- --port {{.VITE_PORT}}
|
||||
vars:
|
||||
VITE_PORT: '{{.VITE_PORT | default "5173"}}'
|
||||
|
||||
update:build-assets:
|
||||
summary: Updates the build assets
|
||||
dir: build
|
||||
preconditions:
|
||||
- sh: '[ -n "{{.APP_NAME}}" ]'
|
||||
msg: "APP_NAME variable is required"
|
||||
cmds:
|
||||
- wails3 update build-assets -name "{{.APP_NAME}}" -binaryname "{{.APP_NAME}}" -config config.yml -dir .
|
||||
|
|
@@ -1,38 +0,0 @@
# BugSETI Wails v3 Build Configuration
version: '3'

# Build metadata
info:
  companyName: "Lethean"
  productName: "BugSETI"
  productIdentifier: "io.lethean.bugseti"
  description: "Distributed Bug Fixing - like SETI@home but for code"
  copyright: "Copyright 2026 Lethean"
  comments: "Distributed OSS bug fixing application"
  version: "0.1.0"

# Dev mode configuration
dev_mode:
  root_path: .
  log_level: warn
  debounce: 1000
  ignore:
    dir:
      - .git
      - node_modules
      - frontend
      - bin
    file:
      - .DS_Store
      - .gitignore
      - .gitkeep
    watched_extension:
      - "*.go"
    git_ignore: true
  executes:
    - cmd: go build -buildvcs=false -gcflags=all=-l -o bin/bugseti .
      type: blocking
    - cmd: cd frontend && npx ng serve --port ${WAILS_FRONTEND_PORT:-9246}
      type: background
    - cmd: bin/bugseti
      type: primary
@@ -1,37 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundlePackageType</key>
    <string>APPL</string>
    <key>CFBundleName</key>
    <string>BugSETI (Dev)</string>
    <key>CFBundleExecutable</key>
    <string>bugseti</string>
    <key>CFBundleIdentifier</key>
    <string>io.lethean.bugseti.dev</string>
    <key>CFBundleVersion</key>
    <string>0.1.0-dev</string>
    <key>CFBundleGetInfoString</key>
    <string>Distributed Bug Fixing - like SETI@home but for code (Development)</string>
    <key>CFBundleShortVersionString</key>
    <string>0.1.0-dev</string>
    <key>CFBundleIconFile</key>
    <string>icons.icns</string>
    <key>LSMinimumSystemVersion</key>
    <string>10.15.0</string>
    <key>NSHighResolutionCapable</key>
    <true/>
    <key>LSUIElement</key>
    <true/>
    <key>LSApplicationCategoryType</key>
    <string>public.app-category.developer-tools</string>
    <key>NSAppTransportSecurity</key>
    <dict>
        <key>NSAllowsLocalNetworking</key>
        <true/>
        <key>NSAllowsArbitraryLoads</key>
        <true/>
    </dict>
</dict>
</plist>
@@ -1,35 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundlePackageType</key>
    <string>APPL</string>
    <key>CFBundleName</key>
    <string>BugSETI</string>
    <key>CFBundleExecutable</key>
    <string>bugseti</string>
    <key>CFBundleIdentifier</key>
    <string>io.lethean.bugseti</string>
    <key>CFBundleVersion</key>
    <string>0.1.0</string>
    <key>CFBundleGetInfoString</key>
    <string>Distributed Bug Fixing - like SETI@home but for code</string>
    <key>CFBundleShortVersionString</key>
    <string>0.1.0</string>
    <key>CFBundleIconFile</key>
    <string>icons.icns</string>
    <key>LSMinimumSystemVersion</key>
    <string>10.15.0</string>
    <key>NSHighResolutionCapable</key>
    <true/>
    <key>LSUIElement</key>
    <true/>
    <key>LSApplicationCategoryType</key>
    <string>public.app-category.developer-tools</string>
    <key>NSAppTransportSecurity</key>
    <dict>
        <key>NSAllowsLocalNetworking</key>
        <true/>
    </dict>
</dict>
</plist>
@ -1,84 +0,0 @@
|
|||
version: '3'
|
||||
|
||||
includes:
|
||||
common: ../Taskfile.yml
|
||||
|
||||
tasks:
|
||||
build:
|
||||
summary: Creates a production build of the application
|
||||
deps:
|
||||
- task: common:go:mod:tidy
|
||||
- task: common:build:frontend
|
||||
vars:
|
||||
BUILD_FLAGS:
|
||||
ref: .BUILD_FLAGS
|
||||
PRODUCTION:
|
||||
ref: .PRODUCTION
|
||||
- task: common:generate:icons
|
||||
cmds:
|
||||
- go build {{.BUILD_FLAGS}} -o {{.OUTPUT}}
|
||||
vars:
|
||||
BUILD_FLAGS: '{{if eq .PRODUCTION "true"}}-tags production -trimpath -buildvcs=false -ldflags="-w -s"{{else}}-buildvcs=false -gcflags=all="-l"{{end}}'
|
||||
DEFAULT_OUTPUT: '{{.BIN_DIR}}/{{.APP_NAME}}'
|
||||
OUTPUT: '{{ .OUTPUT | default .DEFAULT_OUTPUT }}'
|
||||
env:
|
||||
GOOS: darwin
|
||||
CGO_ENABLED: 1
|
||||
GOARCH: '{{.ARCH | default ARCH}}'
|
||||
CGO_CFLAGS: "-mmacosx-version-min=10.15"
|
||||
CGO_LDFLAGS: "-mmacosx-version-min=10.15"
|
||||
MACOSX_DEPLOYMENT_TARGET: "10.15"
|
||||
PRODUCTION: '{{.PRODUCTION | default "false"}}'
|
||||
|
||||
build:universal:
|
||||
summary: Builds darwin universal binary (arm64 + amd64)
|
||||
deps:
|
||||
- task: build
|
||||
vars:
|
||||
ARCH: amd64
|
||||
OUTPUT: "{{.BIN_DIR}}/{{.APP_NAME}}-amd64"
|
||||
PRODUCTION: '{{.PRODUCTION | default "true"}}'
|
||||
- task: build
|
||||
vars:
|
||||
ARCH: arm64
|
||||
OUTPUT: "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
|
||||
PRODUCTION: '{{.PRODUCTION | default "true"}}'
|
||||
cmds:
|
||||
- lipo -create -output "{{.BIN_DIR}}/{{.APP_NAME}}" "{{.BIN_DIR}}/{{.APP_NAME}}-amd64" "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
|
||||
- rm "{{.BIN_DIR}}/{{.APP_NAME}}-amd64" "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
|
||||
|
||||
package:
|
||||
summary: Packages a production build of the application into a `.app` bundle
|
||||
deps:
|
||||
- task: build
|
||||
vars:
|
||||
PRODUCTION: "true"
|
||||
cmds:
|
||||
- task: create:app:bundle
|
||||
|
||||
package:universal:
|
||||
summary: Packages darwin universal binary (arm64 + amd64)
|
||||
deps:
|
||||
- task: build:universal
|
||||
cmds:
|
||||
- task: create:app:bundle
|
||||
|
||||
create:app:bundle:
|
||||
summary: Creates an `.app` bundle
|
||||
cmds:
|
||||
- mkdir -p {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/{MacOS,Resources}
|
||||
- cp build/darwin/icons.icns {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/Resources
|
||||
- cp {{.BIN_DIR}}/{{.APP_NAME}} {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/MacOS
|
||||
- cp build/darwin/Info.plist {{.BIN_DIR}}/{{.APP_NAME}}.app/Contents
|
||||
- codesign --force --deep --sign - {{.BIN_DIR}}/{{.APP_NAME}}.app
|
||||
|
||||
run:
|
||||
deps:
|
||||
- task: build
|
||||
cmds:
|
||||
- mkdir -p {{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/{MacOS,Resources}
|
||||
- cp build/darwin/icons.icns {{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/Resources
|
||||
- cp {{.BIN_DIR}}/{{.APP_NAME}} {{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/MacOS
|
||||
- cp build/darwin/Info.dev.plist {{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/Info.plist
|
||||
- codesign --force --deep --sign - {{.BIN_DIR}}/{{.APP_NAME}}.dev.app
|
||||
- '{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/MacOS/{{.APP_NAME}}'
|
||||
|
|
@ -1,103 +0,0 @@
|
|||
version: '3'
|
||||
|
||||
includes:
|
||||
common: ../Taskfile.yml
|
||||
|
||||
tasks:
|
||||
build:
|
||||
summary: Builds the application for Linux
|
||||
deps:
|
||||
- task: common:go:mod:tidy
|
||||
- task: common:build:frontend
|
||||
vars:
|
||||
BUILD_FLAGS:
|
||||
ref: .BUILD_FLAGS
|
||||
PRODUCTION:
|
||||
ref: .PRODUCTION
|
||||
- task: common:generate:icons
|
||||
cmds:
|
||||
- go build {{.BUILD_FLAGS}} -o {{.BIN_DIR}}/{{.APP_NAME}}
|
||||
vars:
|
||||
BUILD_FLAGS: '{{if eq .PRODUCTION "true"}}-tags production -trimpath -buildvcs=false -ldflags="-w -s"{{else}}-buildvcs=false -gcflags=all="-l"{{end}}'
|
||||
env:
|
||||
GOOS: linux
|
||||
CGO_ENABLED: 1
|
||||
GOARCH: '{{.ARCH | default ARCH}}'
|
||||
PRODUCTION: '{{.PRODUCTION | default "false"}}'
|
||||
|
||||
package:
|
||||
summary: Packages a production build of the application for Linux
|
||||
deps:
|
||||
- task: build
|
||||
vars:
|
||||
PRODUCTION: "true"
|
||||
cmds:
|
||||
- task: create:appimage
|
||||
- task: create:deb
|
||||
- task: create:rpm
|
||||
|
||||
create:appimage:
|
||||
summary: Creates an AppImage
|
||||
dir: build/linux/appimage
|
||||
deps:
|
||||
- task: build
|
||||
vars:
|
||||
PRODUCTION: "true"
|
||||
- task: generate:dotdesktop
|
||||
cmds:
|
||||
- cp {{.APP_BINARY}} {{.APP_NAME}}
|
||||
- cp ../../appicon.png {{.APP_NAME}}.png
|
||||
- wails3 generate appimage -binary {{.APP_NAME}} -icon {{.ICON}} -desktopfile {{.DESKTOP_FILE}} -outputdir {{.OUTPUT_DIR}} -builddir {{.ROOT_DIR}}/build/linux/appimage/build
|
||||
vars:
|
||||
APP_NAME: '{{.APP_NAME}}'
|
||||
APP_BINARY: '../../../bin/{{.APP_NAME}}'
|
||||
ICON: '{{.APP_NAME}}.png'
|
||||
DESKTOP_FILE: '../{{.APP_NAME}}.desktop'
|
||||
OUTPUT_DIR: '../../../bin'
|
||||
|
||||
create:deb:
|
||||
summary: Creates a deb package
|
||||
deps:
|
||||
- task: build
|
||||
vars:
|
||||
PRODUCTION: "true"
|
||||
cmds:
|
||||
- task: generate:dotdesktop
|
||||
- task: generate:deb
|
||||
|
||||
create:rpm:
|
||||
summary: Creates a rpm package
|
||||
deps:
|
||||
- task: build
|
||||
vars:
|
||||
PRODUCTION: "true"
|
||||
cmds:
|
||||
- task: generate:dotdesktop
|
||||
- task: generate:rpm
|
||||
|
||||
generate:deb:
|
||||
summary: Creates a deb package
|
||||
cmds:
|
||||
- wails3 tool package -name {{.APP_NAME}} -format deb -config ./build/linux/nfpm/nfpm.yaml -out {{.ROOT_DIR}}/bin
|
||||
|
||||
generate:rpm:
|
||||
summary: Creates a rpm package
|
||||
cmds:
|
||||
- wails3 tool package -name {{.APP_NAME}} -format rpm -config ./build/linux/nfpm/nfpm.yaml -out {{.ROOT_DIR}}/bin
|
||||
|
||||
generate:dotdesktop:
|
||||
summary: Generates a `.desktop` file
|
||||
dir: build
|
||||
cmds:
|
||||
- mkdir -p {{.ROOT_DIR}}/build/linux/appimage
|
||||
- wails3 generate .desktop -name "{{.APP_NAME}}" -exec "{{.EXEC}}" -icon "{{.ICON}}" -outputfile {{.ROOT_DIR}}/build/linux/{{.APP_NAME}}.desktop -categories "{{.CATEGORIES}}"
|
||||
vars:
|
||||
APP_NAME: 'BugSETI'
|
||||
EXEC: '{{.APP_NAME}}'
|
||||
ICON: 'bugseti'
|
||||
CATEGORIES: 'Development;'
|
||||
OUTPUTFILE: '{{.ROOT_DIR}}/build/linux/{{.APP_NAME}}.desktop'
|
||||
|
||||
run:
|
||||
cmds:
|
||||
- '{{.BIN_DIR}}/{{.APP_NAME}}'
|
||||
|
|
@@ -1,34 +0,0 @@
# nfpm configuration for BugSETI
name: "bugseti"
arch: "${GOARCH}"
platform: "linux"
version: "0.1.0"
section: "devel"
priority: "optional"
maintainer: "Lethean <developers@lethean.io>"
description: |
  BugSETI - Distributed Bug Fixing
  Like SETI@home but for code. Install the system tray app,
  it pulls OSS issues from GitHub, AI prepares context,
  you fix bugs, and it auto-submits PRs.
vendor: "Lethean"
homepage: "https://forge.lthn.ai/core/go"
license: "MIT"

contents:
  - src: ./bin/bugseti
    dst: /usr/bin/bugseti
  - src: ./build/linux/bugseti.desktop
    dst: /usr/share/applications/bugseti.desktop
  - src: ./build/appicon.png
    dst: /usr/share/icons/hicolor/256x256/apps/bugseti.png

overrides:
  deb:
    dependencies:
      - libwebkit2gtk-4.1-0
      - libgtk-3-0
  rpm:
    dependencies:
      - webkit2gtk4.1
      - gtk3
@@ -1,49 +0,0 @@
version: '3'

includes:
  common: ../Taskfile.yml

tasks:
  build:
    summary: Builds the application for Windows
    deps:
      - task: common:go:mod:tidy
      - task: common:build:frontend
        vars:
          BUILD_FLAGS:
            ref: .BUILD_FLAGS
          PRODUCTION:
            ref: .PRODUCTION
      - task: common:generate:icons
    cmds:
      - go build {{.BUILD_FLAGS}} -o {{.BIN_DIR}}/{{.APP_NAME}}.exe
    vars:
      BUILD_FLAGS: '{{if eq .PRODUCTION "true"}}-tags production -trimpath -buildvcs=false -ldflags="-w -s -H windowsgui"{{else}}-buildvcs=false -gcflags=all="-l"{{end}}'
    env:
      GOOS: windows
      CGO_ENABLED: 1
      GOARCH: '{{.ARCH | default ARCH}}'
      PRODUCTION: '{{.PRODUCTION | default "false"}}'

  package:
    summary: Packages a production build of the application for Windows
    deps:
      - task: build
        vars:
          PRODUCTION: "true"
    cmds:
      - task: create:nsis

  create:nsis:
    summary: Creates an NSIS installer
    cmds:
      - wails3 tool package -name {{.APP_NAME}} -format nsis -config ./build/windows/nsis/installer.nsi -out {{.ROOT_DIR}}/bin

  create:msi:
    summary: Creates an MSI installer
    cmds:
      - wails3 tool package -name {{.APP_NAME}} -format msi -config ./build/windows/wix/main.wxs -out {{.ROOT_DIR}}/bin

  run:
    cmds:
      - '{{.BIN_DIR}}/{{.APP_NAME}}.exe'
@ -1,94 +0,0 @@
|
|||
{
|
||||
"$schema": "./node_modules/@angular/cli/lib/config/schema.json",
|
||||
"version": 1,
|
||||
"newProjectRoot": "projects",
|
||||
"projects": {
|
||||
"bugseti": {
|
||||
"projectType": "application",
|
||||
"schematics": {
|
||||
"@schematics/angular:component": {
|
||||
"style": "scss",
|
||||
"standalone": true
|
||||
}
|
||||
},
|
||||
"root": "",
|
||||
"sourceRoot": "src",
|
||||
"prefix": "app",
|
||||
"architect": {
|
||||
"build": {
|
||||
"builder": "@angular-devkit/build-angular:application",
|
||||
"options": {
|
||||
"outputPath": "dist/bugseti",
|
||||
"index": "src/index.html",
|
||||
"browser": "src/main.ts",
|
||||
"polyfills": ["zone.js"],
|
||||
"tsConfig": "tsconfig.app.json",
|
||||
"inlineStyleLanguage": "scss",
|
||||
"assets": [
|
||||
"src/favicon.ico",
|
||||
"src/assets"
|
||||
],
|
||||
"styles": [
|
||||
"src/styles.scss"
|
||||
],
|
||||
"scripts": []
|
||||
},
|
||||
"configurations": {
|
||||
"production": {
|
||||
"budgets": [
|
||||
{
|
||||
"type": "initial",
|
||||
"maximumWarning": "500kb",
|
||||
"maximumError": "1mb"
|
||||
},
|
||||
{
|
||||
"type": "anyComponentStyle",
|
||||
"maximumWarning": "6kb",
|
||||
"maximumError": "10kb"
|
||||
}
|
||||
],
|
||||
"outputHashing": "all"
|
||||
},
|
||||
"development": {
|
||||
"optimization": false,
|
||||
"extractLicenses": false,
|
||||
"sourceMap": true
|
||||
}
|
||||
},
|
||||
"defaultConfiguration": "production"
|
||||
},
|
||||
"serve": {
|
||||
"builder": "@angular-devkit/build-angular:dev-server",
|
||||
"configurations": {
|
||||
"production": {
|
||||
"buildTarget": "bugseti:build:production"
|
||||
},
|
||||
"development": {
|
||||
"buildTarget": "bugseti:build:development"
|
||||
}
|
||||
},
|
||||
"defaultConfiguration": "development"
|
||||
},
|
||||
"test": {
|
||||
"builder": "@angular-devkit/build-angular:karma",
|
||||
"options": {
|
||||
"polyfills": ["zone.js", "zone.js/testing"],
|
||||
"tsConfig": "tsconfig.spec.json",
|
||||
"inlineStyleLanguage": "scss",
|
||||
"assets": [
|
||||
"src/favicon.ico",
|
||||
"src/assets"
|
||||
],
|
||||
"styles": [
|
||||
"src/styles.scss"
|
||||
],
|
||||
"scripts": []
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"cli": {
|
||||
"analytics": false
|
||||
}
|
||||
}
|
||||
cmd/bugseti/frontend/package-lock.json (generated, 15012 deletions)
File diff suppressed because it is too large.
@@ -1,41 +0,0 @@
{
  "name": "bugseti",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "ng": "ng",
    "start": "ng serve",
    "dev": "ng serve --configuration development",
    "build": "ng build --configuration production",
    "build:dev": "ng build --configuration development",
    "watch": "ng build --watch --configuration development",
    "test": "ng test",
    "lint": "ng lint"
  },
  "dependencies": {
    "@angular/animations": "^19.1.0",
    "@angular/common": "^19.1.0",
    "@angular/compiler": "^19.1.0",
    "@angular/core": "^19.1.0",
    "@angular/forms": "^19.1.0",
    "@angular/platform-browser": "^19.1.0",
    "@angular/platform-browser-dynamic": "^19.1.0",
    "@angular/router": "^19.1.0",
    "rxjs": "~7.8.0",
    "tslib": "^2.3.0",
    "zone.js": "~0.15.0"
  },
  "devDependencies": {
    "@angular-devkit/build-angular": "^19.1.0",
    "@angular/cli": "^21.1.2",
    "@angular/compiler-cli": "^19.1.0",
    "@types/jasmine": "~5.1.0",
    "jasmine-core": "~5.1.0",
    "karma": "~6.4.0",
    "karma-chrome-launcher": "~3.2.0",
    "karma-coverage": "~2.2.0",
    "karma-jasmine": "~5.1.0",
    "karma-jasmine-html-reporter": "~2.1.0",
    "typescript": "~5.5.2"
  }
}
@@ -1,18 +0,0 @@
import { Component } from '@angular/core';
import { RouterOutlet } from '@angular/router';

@Component({
  selector: 'app-root',
  standalone: true,
  imports: [RouterOutlet],
  template: '<router-outlet></router-outlet>',
  styles: [`
    :host {
      display: block;
      height: 100%;
    }
  `]
})
export class AppComponent {
  title = 'BugSETI';
}
@@ -1,9 +0,0 @@
import { ApplicationConfig } from '@angular/core';
import { provideRouter, withHashLocation } from '@angular/router';
import { routes } from './app.routes';

export const appConfig: ApplicationConfig = {
  providers: [
    provideRouter(routes, withHashLocation())
  ]
};
@@ -1,29 +0,0 @@
import { Routes } from '@angular/router';

export const routes: Routes = [
  {
    path: '',
    redirectTo: 'tray',
    pathMatch: 'full'
  },
  {
    path: 'tray',
    loadComponent: () => import('./tray/tray.component').then(m => m.TrayComponent)
  },
  {
    path: 'workbench',
    loadComponent: () => import('./workbench/workbench.component').then(m => m.WorkbenchComponent)
  },
  {
    path: 'settings',
    loadComponent: () => import('./settings/settings.component').then(m => m.SettingsComponent)
  },
  {
    path: 'onboarding',
    loadComponent: () => import('./onboarding/onboarding.component').then(m => m.OnboardingComponent)
  },
  {
    path: 'jellyfin',
    loadComponent: () => import('./jellyfin/jellyfin.component').then(m => m.JellyfinComponent)
  }
];
@ -1,189 +0,0 @@
|
|||
import { Component } from '@angular/core';
|
||||
import { CommonModule } from '@angular/common';
|
||||
import { FormsModule } from '@angular/forms';
|
||||
import { DomSanitizer, SafeResourceUrl } from '@angular/platform-browser';
|
||||
|
||||
type Mode = 'web' | 'stream';
|
||||
|
||||
@Component({
|
||||
selector: 'app-jellyfin',
|
||||
standalone: true,
|
||||
imports: [CommonModule, FormsModule],
|
||||
template: `
|
||||
<div class="jellyfin">
|
||||
<header class="jellyfin__header">
|
||||
<div>
|
||||
<h1>Jellyfin Player</h1>
|
||||
<p class="text-muted">Quick embed for media.lthn.ai or any Jellyfin host.</p>
|
||||
</div>
|
||||
<div class="mode-switch">
|
||||
<button class="btn btn--secondary" [class.is-active]="mode === 'web'" (click)="mode = 'web'">Web</button>
|
||||
<button class="btn btn--secondary" [class.is-active]="mode === 'stream'" (click)="mode = 'stream'">Stream</button>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<div class="card jellyfin__config">
|
||||
<div class="form-group">
|
||||
<label class="form-label">Jellyfin Server URL</label>
|
||||
<input class="form-input" [(ngModel)]="serverUrl" placeholder="https://media.lthn.ai" />
|
||||
</div>
|
||||
|
||||
<div *ngIf="mode === 'stream'" class="stream-grid">
|
||||
<div class="form-group">
|
||||
<label class="form-label">Item ID</label>
|
||||
<input class="form-input" [(ngModel)]="itemId" placeholder="Jellyfin library item ID" />
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label class="form-label">API Key</label>
|
||||
<input class="form-input" [(ngModel)]="apiKey" placeholder="Jellyfin API key" />
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label class="form-label">Media Source ID (optional)</label>
|
||||
<input class="form-input" [(ngModel)]="mediaSourceId" placeholder="Source ID for multi-source items" />
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="actions">
|
||||
<button class="btn btn--primary" (click)="load()">Load Player</button>
|
||||
<button class="btn btn--secondary" (click)="reset()">Reset</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="card jellyfin__viewer" *ngIf="loaded && mode === 'web'">
|
||||
<iframe
|
||||
class="jellyfin-frame"
|
||||
title="Jellyfin Web"
|
||||
[src]="safeWebUrl"
|
||||
loading="lazy"
|
||||
referrerpolicy="no-referrer"
|
||||
></iframe>
|
||||
</div>
|
||||
|
||||
<div class="card jellyfin__viewer" *ngIf="loaded && mode === 'stream'">
|
||||
<video class="jellyfin-video" controls [src]="streamUrl"></video>
|
||||
<p class="text-muted stream-hint" *ngIf="!streamUrl">Set Item ID and API key to build stream URL.</p>
|
||||
</div>
|
||||
</div>
|
||||
`,
|
||||
styles: [`
|
||||
.jellyfin {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--spacing-md);
|
||||
padding: var(--spacing-md);
|
||||
height: 100%;
|
||||
overflow: auto;
|
||||
background: var(--bg-secondary);
|
||||
}
|
||||
|
||||
.jellyfin__header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
gap: var(--spacing-md);
|
||||
}
|
||||
|
||||
.jellyfin__header h1 {
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.mode-switch {
|
||||
display: flex;
|
||||
gap: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.mode-switch .btn.is-active {
|
||||
border-color: var(--accent-primary);
|
||||
color: var(--accent-primary);
|
||||
}
|
||||
|
||||
.jellyfin__config {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.stream-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fit, minmax(280px, 1fr));
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.actions {
|
||||
display: flex;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.jellyfin__viewer {
|
||||
flex: 1;
|
||||
min-height: 420px;
|
||||
padding: 0;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.jellyfin-frame,
|
||||
.jellyfin-video {
|
||||
border: 0;
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
min-height: 420px;
|
||||
background: #000;
|
||||
}
|
||||
|
||||
.stream-hint {
|
||||
padding: var(--spacing-md);
|
||||
margin: 0;
|
||||
}
|
||||
`]
|
||||
})
|
||||
export class JellyfinComponent {
|
||||
mode: Mode = 'web';
|
||||
loaded = false;
|
||||
|
||||
serverUrl = 'https://media.lthn.ai';
|
||||
itemId = '';
|
||||
apiKey = '';
|
||||
mediaSourceId = '';
|
||||
|
||||
safeWebUrl!: SafeResourceUrl;
|
||||
streamUrl = '';
|
||||
|
||||
constructor(private sanitizer: DomSanitizer) {
|
||||
this.safeWebUrl = this.sanitizer.bypassSecurityTrustResourceUrl('https://media.lthn.ai/web/index.html');
|
||||
}
|
||||
|
||||
load(): void {
|
||||
const base = this.normalizeBase(this.serverUrl);
|
||||
this.safeWebUrl = this.sanitizer.bypassSecurityTrustResourceUrl(`${base}/web/index.html`);
|
||||
this.streamUrl = this.buildStreamUrl(base);
|
||||
this.loaded = true;
|
||||
}
|
||||
|
||||
reset(): void {
|
||||
this.loaded = false;
|
||||
this.itemId = '';
|
||||
this.apiKey = '';
|
||||
this.mediaSourceId = '';
|
||||
this.streamUrl = '';
|
||||
}
|
||||
|
||||
private normalizeBase(value: string): string {
|
||||
const raw = value.trim() || 'https://media.lthn.ai';
|
||||
const withProtocol = raw.startsWith('http://') || raw.startsWith('https://') ? raw : `https://${raw}`;
|
||||
return withProtocol.replace(/\/+$/, '');
|
||||
}
|
||||
|
||||
private buildStreamUrl(base: string): string {
|
||||
if (!this.itemId.trim() || !this.apiKey.trim()) {
|
||||
return '';
|
||||
}
|
||||
|
||||
const url = new URL(`${base}/Videos/${encodeURIComponent(this.itemId.trim())}/stream`);
|
||||
url.searchParams.set('api_key', this.apiKey.trim());
|
||||
url.searchParams.set('static', 'true');
|
||||
if (this.mediaSourceId.trim()) {
|
||||
url.searchParams.set('MediaSourceId', this.mediaSourceId.trim());
|
||||
}
|
||||
return url.toString();
|
||||
}
|
||||
}
|
||||
|
|
@ -1,457 +0,0 @@
|
|||
import { Component } from '@angular/core';
|
||||
import { CommonModule } from '@angular/common';
|
||||
import { FormsModule } from '@angular/forms';
|
||||
|
||||
@Component({
|
||||
selector: 'app-onboarding',
|
||||
standalone: true,
|
||||
imports: [CommonModule, FormsModule],
|
||||
template: `
|
||||
<div class="onboarding">
|
||||
<div class="onboarding-content">
|
||||
<!-- Step 1: Welcome -->
|
||||
<div class="step" *ngIf="step === 1">
|
||||
<div class="step-icon">B</div>
|
||||
<h1>Welcome to BugSETI</h1>
|
||||
<p class="subtitle">Distributed Bug Fixing - like SETI@home but for code</p>
|
||||
|
||||
<div class="feature-list">
|
||||
<div class="feature">
|
||||
<span class="feature-icon">[1]</span>
|
||||
<div>
|
||||
<strong>Find Issues</strong>
|
||||
<p>We pull beginner-friendly issues from OSS projects you care about.</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="feature">
|
||||
<span class="feature-icon">[2]</span>
|
||||
<div>
|
||||
<strong>Get Context</strong>
|
||||
<p>AI prepares relevant context to help you understand each issue.</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="feature">
|
||||
<span class="feature-icon">[3]</span>
|
||||
<div>
|
||||
<strong>Submit PRs</strong>
|
||||
<p>Fix bugs and submit PRs with minimal friction.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<button class="btn btn--primary btn--lg" (click)="nextStep()">Get Started</button>
|
||||
</div>
|
||||
|
||||
<!-- Step 2: GitHub Auth -->
|
||||
<div class="step" *ngIf="step === 2">
|
||||
<h2>Connect GitHub</h2>
|
||||
<p>BugSETI uses the GitHub CLI (gh) to interact with repositories.</p>
|
||||
|
||||
<div class="auth-status" [class.auth-success]="ghAuthenticated">
|
||||
<span class="status-icon">{{ ghAuthenticated ? '[OK]' : '[!]' }}</span>
|
||||
<span>{{ ghAuthenticated ? 'GitHub CLI authenticated' : 'GitHub CLI not detected' }}</span>
|
||||
</div>
|
||||
|
||||
<div class="auth-instructions" *ngIf="!ghAuthenticated">
|
||||
<p>To authenticate with GitHub CLI, run:</p>
|
||||
<code>gh auth login</code>
|
||||
<p class="note">After authenticating, click "Check Again".</p>
|
||||
</div>
|
||||
|
||||
<div class="step-actions">
|
||||
<button class="btn btn--secondary" (click)="checkGhAuth()">Check Again</button>
|
||||
<button class="btn btn--primary" (click)="nextStep()" [disabled]="!ghAuthenticated">Continue</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Step 3: Select Repos -->
|
||||
<div class="step" *ngIf="step === 3">
|
||||
<h2>Choose Repositories</h2>
|
||||
<p>Add repositories you want to contribute to.</p>
|
||||
|
||||
<div class="repo-input">
|
||||
<input type="text" class="form-input" [(ngModel)]="newRepo"
|
||||
placeholder="owner/repo (e.g., facebook/react)">
|
||||
<button class="btn btn--secondary" (click)="addRepo()" [disabled]="!newRepo">Add</button>
|
||||
</div>
|
||||
|
||||
<div class="selected-repos" *ngIf="selectedRepos.length">
|
||||
<h3>Selected Repositories</h3>
|
||||
<div class="repo-chip" *ngFor="let repo of selectedRepos; let i = index">
|
||||
{{ repo }}
|
||||
<button class="repo-remove" (click)="removeRepo(i)">x</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="suggested-repos">
|
||||
<h3>Suggested Repositories</h3>
|
||||
<div class="suggested-list">
|
||||
<button class="suggestion" *ngFor="let repo of suggestedRepos" (click)="addSuggested(repo)">
|
||||
{{ repo }}
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="step-actions">
|
||||
<button class="btn btn--secondary" (click)="prevStep()">Back</button>
|
||||
<button class="btn btn--primary" (click)="nextStep()" [disabled]="selectedRepos.length === 0">Continue</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Step 4: Complete -->
|
||||
<div class="step" *ngIf="step === 4">
|
||||
<div class="complete-icon">[OK]</div>
|
||||
<h2>You're All Set!</h2>
|
||||
<p>BugSETI is ready to help you contribute to open source.</p>
|
||||
|
||||
<div class="summary">
|
||||
<p><strong>{{ selectedRepos.length }}</strong> repositories selected</p>
|
||||
<p>Looking for issues with these labels:</p>
|
||||
<div class="label-list">
|
||||
<span class="badge badge--primary">good first issue</span>
|
||||
<span class="badge badge--primary">help wanted</span>
|
||||
<span class="badge badge--primary">beginner-friendly</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<button class="btn btn--success btn--lg" (click)="complete()">Start Finding Issues</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="step-indicators">
|
||||
<span class="indicator" [class.active]="step >= 1" [class.current]="step === 1"></span>
|
||||
<span class="indicator" [class.active]="step >= 2" [class.current]="step === 2"></span>
|
||||
<span class="indicator" [class.active]="step >= 3" [class.current]="step === 3"></span>
|
||||
<span class="indicator" [class.active]="step >= 4" [class.current]="step === 4"></span>
|
||||
</div>
|
||||
</div>
|
||||
`,
|
||||
styles: [`
|
||||
.onboarding {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
height: 100%;
|
||||
background-color: var(--bg-primary);
|
||||
}
|
||||
|
||||
.onboarding-content {
|
||||
flex: 1;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
padding: var(--spacing-xl);
|
||||
}
|
||||
|
||||
.step {
|
||||
max-width: 500px;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.step-icon, .complete-icon {
|
||||
width: 80px;
|
||||
height: 80px;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
margin: 0 auto var(--spacing-lg);
|
||||
background: linear-gradient(135deg, var(--accent-primary), var(--accent-success));
|
||||
border-radius: var(--radius-lg);
|
||||
font-size: 32px;
|
||||
font-weight: bold;
|
||||
color: white;
|
||||
}
|
||||
|
||||
.complete-icon {
|
||||
background: var(--accent-success);
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-size: 28px;
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
h2 {
|
||||
font-size: 24px;
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.subtitle {
|
||||
color: var(--text-secondary);
|
||||
margin-bottom: var(--spacing-xl);
|
||||
}
|
||||
|
||||
.feature-list {
|
||||
text-align: left;
|
||||
margin-bottom: var(--spacing-xl);
|
||||
}
|
||||
|
||||
.feature {
|
||||
display: flex;
|
||||
gap: var(--spacing-md);
|
||||
margin-bottom: var(--spacing-md);
|
||||
padding: var(--spacing-md);
|
||||
background-color: var(--bg-secondary);
|
||||
border-radius: var(--radius-md);
|
||||
}
|
||||
|
||||
.feature-icon {
|
||||
font-family: var(--font-mono);
|
||||
color: var(--accent-primary);
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.feature strong {
|
||||
display: block;
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.feature p {
|
||||
color: var(--text-secondary);
|
||||
font-size: 13px;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.auth-status {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
gap: var(--spacing-sm);
|
||||
padding: var(--spacing-md);
|
||||
background-color: var(--bg-tertiary);
|
||||
border-radius: var(--radius-md);
|
||||
margin: var(--spacing-lg) 0;
|
||||
}
|
||||
|
||||
.auth-status.auth-success {
|
||||
background-color: rgba(63, 185, 80, 0.15);
|
||||
color: var(--accent-success);
|
||||
}
|
||||
|
||||
.status-icon {
|
||||
font-family: var(--font-mono);
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.auth-instructions {
|
||||
text-align: left;
|
||||
padding: var(--spacing-md);
|
||||
background-color: var(--bg-secondary);
|
||||
border-radius: var(--radius-md);
|
||||
}
|
||||
|
||||
.auth-instructions code {
|
||||
display: block;
|
||||
margin: var(--spacing-md) 0;
|
||||
padding: var(--spacing-md);
|
||||
background-color: var(--bg-tertiary);
|
||||
}
|
||||
|
||||
.auth-instructions .note {
|
||||
color: var(--text-muted);
|
||||
font-size: 13px;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.step-actions {
|
||||
display: flex;
|
||||
gap: var(--spacing-md);
|
||||
justify-content: center;
|
||||
margin-top: var(--spacing-xl);
|
||||
}
|
||||
|
||||
.repo-input {
|
||||
display: flex;
|
||||
gap: var(--spacing-sm);
|
||||
margin-bottom: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.repo-input .form-input {
|
||||
flex: 1;
|
||||
}
|
||||
|
||||
.selected-repos, .suggested-repos {
|
||||
text-align: left;
|
||||
margin-bottom: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.selected-repos h3, .suggested-repos h3 {
|
||||
font-size: 12px;
|
||||
text-transform: uppercase;
|
||||
color: var(--text-muted);
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.repo-chip {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: var(--spacing-xs);
|
||||
padding: var(--spacing-xs) var(--spacing-sm);
|
||||
background-color: var(--bg-secondary);
|
||||
border-radius: var(--radius-md);
|
||||
margin-right: var(--spacing-xs);
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.repo-remove {
|
||||
background: none;
|
||||
border: none;
|
||||
color: var(--text-muted);
|
||||
cursor: pointer;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
.suggested-list {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
gap: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.suggestion {
|
||||
padding: var(--spacing-xs) var(--spacing-sm);
|
||||
background-color: var(--bg-tertiary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-md);
|
||||
color: var(--text-secondary);
|
||||
cursor: pointer;
|
||||
font-size: 13px;
|
||||
}
|
||||
|
||||
.suggestion:hover {
|
||||
background-color: var(--bg-secondary);
|
||||
border-color: var(--accent-primary);
|
||||
}
|
||||
|
||||
.summary {
|
||||
padding: var(--spacing-lg);
|
||||
background-color: var(--bg-secondary);
|
||||
border-radius: var(--radius-md);
|
||||
margin-bottom: var(--spacing-xl);
|
||||
}
|
||||
|
||||
.summary p {
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.label-list {
|
||||
display: flex;
|
||||
gap: var(--spacing-xs);
|
||||
justify-content: center;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.step-indicators {
|
||||
display: flex;
|
||||
justify-content: center;
|
||||
gap: var(--spacing-sm);
|
||||
padding: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.indicator {
|
||||
width: 8px;
|
||||
height: 8px;
|
||||
border-radius: 50%;
|
||||
background-color: var(--border-color);
|
||||
}
|
||||
|
||||
.indicator.active {
|
||||
background-color: var(--accent-primary);
|
||||
}
|
||||
|
||||
.indicator.current {
|
||||
width: 24px;
|
||||
border-radius: 4px;
|
||||
}
|
||||
|
||||
.btn--lg {
|
||||
padding: var(--spacing-md) var(--spacing-xl);
|
||||
font-size: 16px;
|
||||
}
|
||||
`]
|
||||
})
|
||||
export class OnboardingComponent {
|
||||
step = 1;
|
||||
ghAuthenticated = false;
|
||||
newRepo = '';
|
||||
selectedRepos: string[] = [];
|
||||
suggestedRepos = [
|
||||
'facebook/react',
|
||||
'microsoft/vscode',
|
||||
'golang/go',
|
||||
'kubernetes/kubernetes',
|
||||
'rust-lang/rust',
|
||||
'angular/angular',
|
||||
'nodejs/node',
|
||||
'python/cpython'
|
||||
];
|
||||
|
||||
ngOnInit() {
|
||||
this.checkGhAuth();
|
||||
}
|
||||
|
||||
nextStep() {
|
||||
if (this.step < 4) {
|
||||
this.step++;
|
||||
}
|
||||
}
|
||||
|
||||
prevStep() {
|
||||
if (this.step > 1) {
|
||||
this.step--;
|
||||
}
|
||||
}
|
||||
|
||||
async checkGhAuth() {
|
||||
try {
|
||||
// Check if gh CLI is authenticated
|
||||
// In a real implementation, this would call the backend
|
||||
this.ghAuthenticated = true; // Assume authenticated for demo
|
||||
} catch (err) {
|
||||
this.ghAuthenticated = false;
|
||||
}
|
||||
}
|
||||
|
||||
addRepo() {
|
||||
if (this.newRepo && !this.selectedRepos.includes(this.newRepo)) {
|
||||
this.selectedRepos.push(this.newRepo);
|
||||
this.newRepo = '';
|
||||
}
|
||||
}
|
||||
|
||||
removeRepo(index: number) {
|
||||
this.selectedRepos.splice(index, 1);
|
||||
}
|
||||
|
||||
addSuggested(repo: string) {
|
||||
if (!this.selectedRepos.includes(repo)) {
|
||||
this.selectedRepos.push(repo);
|
||||
}
|
||||
}
|
||||
|
||||
async complete() {
|
||||
try {
|
||||
// Save repos to config
|
||||
if ((window as any).go?.main?.ConfigService?.SetConfig) {
|
||||
const config = await (window as any).go.main.ConfigService.GetConfig() || {};
|
||||
config.watchedRepos = this.selectedRepos;
|
||||
await (window as any).go.main.ConfigService.SetConfig(config);
|
||||
}
|
||||
|
||||
// Mark onboarding as complete
|
||||
if ((window as any).go?.main?.TrayService?.CompleteOnboarding) {
|
||||
await (window as any).go.main.TrayService.CompleteOnboarding();
|
||||
}
|
||||
|
||||
// Close onboarding window and start fetching
|
||||
if ((window as any).wails?.Window) {
|
||||
(window as any).wails.Window.GetByName('onboarding').then((w: any) => w.Hide());
|
||||
}
|
||||
|
||||
// Start fetching
|
||||
if ((window as any).go?.main?.TrayService?.StartFetching) {
|
||||
await (window as any).go.main.TrayService.StartFetching();
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to complete onboarding:', err);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -1,407 +0,0 @@
|
|||
import { Component, OnInit } from '@angular/core';
|
||||
import { CommonModule } from '@angular/common';
|
||||
import { FormsModule } from '@angular/forms';
|
||||
|
||||
interface Config {
|
||||
watchedRepos: string[];
|
||||
labels: string[];
|
||||
fetchIntervalMinutes: number;
|
||||
notificationsEnabled: boolean;
|
||||
notificationSound: boolean;
|
||||
workspaceDir: string;
|
||||
marketplaceMcpRoot: string;
|
||||
theme: string;
|
||||
autoSeedContext: boolean;
|
||||
workHours?: {
|
||||
enabled: boolean;
|
||||
startHour: number;
|
||||
endHour: number;
|
||||
days: number[];
|
||||
timezone: string;
|
||||
};
|
||||
}
|
||||
|
||||
@Component({
|
||||
selector: 'app-settings',
|
||||
standalone: true,
|
||||
imports: [CommonModule, FormsModule],
|
||||
template: `
|
||||
<div class="settings">
|
||||
<header class="settings-header">
|
||||
<h1>Settings</h1>
|
||||
<button class="btn btn--primary" (click)="saveSettings()">Save</button>
|
||||
</header>
|
||||
|
||||
<div class="settings-content">
|
||||
<section class="settings-section">
|
||||
<h2>Repositories</h2>
|
||||
<p class="section-description">Add GitHub repositories to watch for issues.</p>
|
||||
|
||||
<div class="repo-list">
|
||||
<div class="repo-item" *ngFor="let repo of config.watchedRepos; let i = index">
|
||||
<span>{{ repo }}</span>
|
||||
<button class="btn btn--danger btn--sm" (click)="removeRepo(i)">Remove</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="add-repo">
|
||||
<input type="text" class="form-input" [(ngModel)]="newRepo"
|
||||
placeholder="owner/repo (e.g., facebook/react)">
|
||||
<button class="btn btn--secondary" (click)="addRepo()" [disabled]="!newRepo">Add</button>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="settings-section">
|
||||
<h2>Issue Labels</h2>
|
||||
<p class="section-description">Filter issues by these labels.</p>
|
||||
|
||||
<div class="label-list">
|
||||
<span class="label-chip" *ngFor="let label of config.labels; let i = index">
|
||||
{{ label }}
|
||||
<button class="label-remove" (click)="removeLabel(i)">x</button>
|
||||
</span>
|
||||
</div>
|
||||
|
||||
<div class="add-label">
|
||||
<input type="text" class="form-input" [(ngModel)]="newLabel"
|
||||
placeholder="Add label (e.g., good first issue)">
|
||||
<button class="btn btn--secondary" (click)="addLabel()" [disabled]="!newLabel">Add</button>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="settings-section">
|
||||
<h2>Fetch Settings</h2>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Fetch Interval (minutes)</label>
|
||||
<input type="number" class="form-input" [(ngModel)]="config.fetchIntervalMinutes" min="5" max="120">
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="checkbox-label">
|
||||
<input type="checkbox" [(ngModel)]="config.autoSeedContext">
|
||||
<span>Auto-prepare AI context for issues</span>
|
||||
</label>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="settings-section">
|
||||
<h2>Work Hours</h2>
|
||||
<p class="section-description">Only fetch issues during these hours.</p>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="checkbox-label">
|
||||
<input type="checkbox" [(ngModel)]="config.workHours!.enabled">
|
||||
<span>Enable work hours</span>
|
||||
</label>
|
||||
</div>
|
||||
|
||||
<div class="work-hours-config" *ngIf="config.workHours?.enabled">
|
||||
<div class="form-group">
|
||||
<label class="form-label">Start Hour</label>
|
||||
<select class="form-select" [(ngModel)]="config.workHours!.startHour">
|
||||
<option *ngFor="let h of hours" [value]="h">{{ h }}:00</option>
|
||||
</select>
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">End Hour</label>
|
||||
<select class="form-select" [(ngModel)]="config.workHours!.endHour">
|
||||
<option *ngFor="let h of hours" [value]="h">{{ h }}:00</option>
|
||||
</select>
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Days</label>
|
||||
<div class="day-checkboxes">
|
||||
<label class="checkbox-label" *ngFor="let day of days; let i = index">
|
||||
<input type="checkbox" [checked]="isDaySelected(i)" (change)="toggleDay(i)">
|
||||
<span>{{ day }}</span>
|
||||
</label>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="settings-section">
|
||||
<h2>Notifications</h2>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="checkbox-label">
|
||||
<input type="checkbox" [(ngModel)]="config.notificationsEnabled">
|
||||
<span>Enable desktop notifications</span>
|
||||
</label>
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="checkbox-label">
|
||||
<input type="checkbox" [(ngModel)]="config.notificationSound">
|
||||
<span>Play notification sounds</span>
|
||||
</label>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="settings-section">
|
||||
<h2>Appearance</h2>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Theme</label>
|
||||
<select class="form-select" [(ngModel)]="config.theme">
|
||||
<option value="dark">Dark</option>
|
||||
<option value="light">Light</option>
|
||||
<option value="system">System</option>
|
||||
</select>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="settings-section">
|
||||
<h2>Storage</h2>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Workspace Directory</label>
|
||||
<input type="text" class="form-input" [(ngModel)]="config.workspaceDir"
|
||||
placeholder="Leave empty for default">
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Marketplace MCP Root</label>
|
||||
<input type="text" class="form-input" [(ngModel)]="config.marketplaceMcpRoot"
|
||||
placeholder="Path to core-agent (optional)">
|
||||
<p class="section-description">Override the marketplace MCP root. Leave empty to auto-detect.</p>
|
||||
</div>
|
||||
</section>
|
||||
</div>
|
||||
</div>
|
||||
`,
|
||||
styles: [`
|
||||
.settings {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
height: 100%;
|
||||
background-color: var(--bg-secondary);
|
||||
}
|
||||
|
||||
.settings-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
padding: var(--spacing-md) var(--spacing-lg);
|
||||
background-color: var(--bg-primary);
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.settings-header h1 {
|
||||
font-size: 18px;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.settings-content {
|
||||
flex: 1;
|
||||
overflow-y: auto;
|
||||
padding: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.settings-section {
|
||||
background-color: var(--bg-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-lg);
|
||||
padding: var(--spacing-lg);
|
||||
margin-bottom: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.settings-section h2 {
|
||||
font-size: 16px;
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.section-description {
|
||||
color: var(--text-muted);
|
||||
font-size: 13px;
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.repo-list, .label-list {
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.repo-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
padding: var(--spacing-sm);
|
||||
background-color: var(--bg-secondary);
|
||||
border-radius: var(--radius-md);
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.add-repo, .add-label {
|
||||
display: flex;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.add-repo .form-input, .add-label .form-input {
|
||||
flex: 1;
|
||||
}
|
||||
|
||||
.label-list {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
gap: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.label-chip {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: var(--spacing-xs);
|
||||
padding: var(--spacing-xs) var(--spacing-sm);
|
||||
background-color: var(--bg-tertiary);
|
||||
border-radius: 999px;
|
||||
font-size: 13px;
|
||||
}
|
||||
|
||||
.label-remove {
|
||||
background: none;
|
||||
border: none;
|
||||
color: var(--text-muted);
|
||||
cursor: pointer;
|
||||
padding: 0;
|
||||
font-size: 14px;
|
||||
line-height: 1;
|
||||
}
|
||||
|
||||
.label-remove:hover {
|
||||
color: var(--accent-danger);
|
||||
}
|
||||
|
||||
.checkbox-label {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: var(--spacing-sm);
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.checkbox-label input[type="checkbox"] {
|
||||
width: 16px;
|
||||
height: 16px;
|
||||
}
|
||||
|
||||
.work-hours-config {
|
||||
display: grid;
|
||||
grid-template-columns: 1fr 1fr;
|
||||
gap: var(--spacing-md);
|
||||
margin-top: var(--spacing-md);
|
||||
}
|
||||
|
||||
.day-checkboxes {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.day-checkboxes .checkbox-label {
|
||||
width: auto;
|
||||
}
|
||||
|
||||
.btn--sm {
|
||||
padding: var(--spacing-xs) var(--spacing-sm);
|
||||
font-size: 12px;
|
||||
}
|
||||
`]
|
||||
})
|
||||
export class SettingsComponent implements OnInit {
|
||||
config: Config = {
|
||||
watchedRepos: [],
|
||||
labels: ['good first issue', 'help wanted'],
|
||||
fetchIntervalMinutes: 15,
|
||||
notificationsEnabled: true,
|
||||
notificationSound: true,
|
||||
workspaceDir: '',
|
||||
marketplaceMcpRoot: '',
|
||||
theme: 'dark',
|
||||
autoSeedContext: true,
|
||||
workHours: {
|
||||
enabled: false,
|
||||
startHour: 9,
|
||||
endHour: 17,
|
||||
days: [1, 2, 3, 4, 5],
|
||||
timezone: ''
|
||||
}
|
||||
};
|
||||
|
||||
newRepo = '';
|
||||
newLabel = '';
|
||||
hours = Array.from({ length: 24 }, (_, i) => i);
|
||||
days = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'];
|
||||
|
||||
ngOnInit() {
|
||||
this.loadConfig();
|
||||
}
|
||||
|
||||
async loadConfig() {
|
||||
try {
|
||||
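// The Go ConfigService binding is only present when running inside the Wails runtime; in a plain browser the defaults above are kept.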
if ((window as any).go?.main?.ConfigService?.GetConfig) {
|
||||
this.config = await (window as any).go.main.ConfigService.GetConfig();
|
||||
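// Older configs may not contain workHours; backfill a disabled default so the template bindings stay safe.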
if (!this.config.workHours) {
|
||||
this.config.workHours = {
|
||||
enabled: false,
|
||||
startHour: 9,
|
||||
endHour: 17,
|
||||
days: [1, 2, 3, 4, 5],
|
||||
timezone: ''
|
||||
};
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to load config:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async saveSettings() {
|
||||
try {
|
||||
if ((window as any).go?.main?.ConfigService?.SetConfig) {
|
||||
await (window as any).go.main.ConfigService.SetConfig(this.config);
|
||||
alert('Settings saved!');
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to save config:', err);
|
||||
alert('Failed to save settings.');
|
||||
}
|
||||
}
|
||||
|
||||
addRepo() {
|
||||
if (this.newRepo && !this.config.watchedRepos.includes(this.newRepo)) {
|
||||
this.config.watchedRepos.push(this.newRepo);
|
||||
this.newRepo = '';
|
||||
}
|
||||
}
|
||||
|
||||
removeRepo(index: number) {
|
||||
this.config.watchedRepos.splice(index, 1);
|
||||
}
|
||||
|
||||
addLabel() {
|
||||
if (this.newLabel && !this.config.labels.includes(this.newLabel)) {
|
||||
this.config.labels.push(this.newLabel);
|
||||
this.newLabel = '';
|
||||
}
|
||||
}
|
||||
|
||||
removeLabel(index: number) {
|
||||
this.config.labels.splice(index, 1);
|
||||
}
|
||||
|
||||
isDaySelected(day: number): boolean {
|
||||
return this.config.workHours?.days.includes(day) || false;
|
||||
}
|
||||
|
||||
toggleDay(day: number) {
|
||||
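// Days are stored as indexes 0 (Sun) through 6 (Sat), matching the days label array; toggling adds or removes the index.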
if (!this.config.workHours) return;
|
||||
|
||||
const index = this.config.workHours.days.indexOf(day);
|
||||
if (index === -1) {
|
||||
this.config.workHours.days.push(day);
|
||||
} else {
|
||||
this.config.workHours.days.splice(index, 1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,556 +0,0 @@
|
|||
import { Component, OnInit, OnDestroy } from '@angular/core';
|
||||
import { CommonModule } from '@angular/common';
|
||||
import { FormsModule } from '@angular/forms';
|
||||
|
||||
interface UpdateSettings {
|
||||
channel: string;
|
||||
autoUpdate: boolean;
|
||||
checkInterval: number;
|
||||
lastCheck: string;
|
||||
}
|
||||
|
||||
interface VersionInfo {
|
||||
version: string;
|
||||
channel: string;
|
||||
commit: string;
|
||||
buildTime: string;
|
||||
goVersion: string;
|
||||
os: string;
|
||||
arch: string;
|
||||
}
|
||||
|
||||
interface ChannelInfo {
|
||||
id: string;
|
||||
name: string;
|
||||
description: string;
|
||||
}
|
||||
|
||||
interface UpdateCheckResult {
|
||||
available: boolean;
|
||||
currentVersion: string;
|
||||
latestVersion: string;
|
||||
release?: {
|
||||
version: string;
|
||||
channel: string;
|
||||
tag: string;
|
||||
name: string;
|
||||
body: string;
|
||||
publishedAt: string;
|
||||
htmlUrl: string;
|
||||
};
|
||||
error?: string;
|
||||
checkedAt: string;
|
||||
}
|
||||
|
||||
@Component({
|
||||
selector: 'app-updates-settings',
|
||||
standalone: true,
|
||||
imports: [CommonModule, FormsModule],
|
||||
template: `
|
||||
<div class="updates-settings">
|
||||
<div class="current-version">
|
||||
<div class="version-badge">
|
||||
<span class="version-number">{{ versionInfo?.version || 'Unknown' }}</span>
|
||||
<span class="channel-badge" [class]="'channel-' + (versionInfo?.channel || 'dev')">
|
||||
{{ versionInfo?.channel || 'dev' }}
|
||||
</span>
|
||||
</div>
|
||||
<p class="build-info" *ngIf="versionInfo">
|
||||
Built {{ versionInfo.buildTime | date:'medium' }} ({{ versionInfo.commit?.substring(0, 7) }})
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<div class="update-check" *ngIf="checkResult">
|
||||
<div class="update-available" *ngIf="checkResult.available">
|
||||
<div class="update-icon">!</div>
|
||||
<div class="update-info">
|
||||
<h4>Update Available</h4>
|
||||
<p>Version {{ checkResult.latestVersion }} is available</p>
|
||||
<a *ngIf="checkResult.release?.htmlUrl"
|
||||
[href]="checkResult.release.htmlUrl"
|
||||
target="_blank"
|
||||
class="release-link">
|
||||
View Release Notes
|
||||
</a>
|
||||
</div>
|
||||
<button class="btn btn--primary" (click)="installUpdate()" [disabled]="isInstalling">
|
||||
{{ isInstalling ? 'Installing...' : 'Install Update' }}
|
||||
</button>
|
||||
</div>
|
||||
|
||||
<div class="up-to-date" *ngIf="!checkResult.available && !checkResult.error">
|
||||
<div class="check-icon">OK</div>
|
||||
<div class="check-info">
|
||||
<h4>Up to Date</h4>
|
||||
<p>You're running the latest version</p>
|
||||
<span class="last-check" *ngIf="checkResult.checkedAt">
|
||||
Last checked: {{ checkResult.checkedAt | date:'short' }}
|
||||
</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="check-error" *ngIf="checkResult.error">
|
||||
<div class="error-icon">X</div>
|
||||
<div class="error-info">
|
||||
<h4>Check Failed</h4>
|
||||
<p>{{ checkResult.error }}</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="check-button-row">
|
||||
<button class="btn btn--secondary" (click)="checkForUpdates()" [disabled]="isChecking">
|
||||
{{ isChecking ? 'Checking...' : 'Check for Updates' }}
|
||||
</button>
|
||||
</div>
|
||||
|
||||
<div class="settings-section">
|
||||
<h3>Update Channel</h3>
|
||||
<p class="section-description">Choose which release channel to follow for updates.</p>
|
||||
|
||||
<div class="channel-options">
|
||||
<label class="channel-option" *ngFor="let channel of channels"
|
||||
[class.selected]="settings.channel === channel.id">
|
||||
<input type="radio"
|
||||
[name]="'channel'"
|
||||
[value]="channel.id"
|
||||
[(ngModel)]="settings.channel"
|
||||
(change)="onSettingsChange()">
|
||||
<div class="channel-content">
|
||||
<span class="channel-name">{{ channel.name }}</span>
|
||||
<span class="channel-desc">{{ channel.description }}</span>
|
||||
</div>
|
||||
</label>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="settings-section">
|
||||
<h3>Automatic Updates</h3>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="checkbox-label">
|
||||
<input type="checkbox"
|
||||
[(ngModel)]="settings.autoUpdate"
|
||||
(change)="onSettingsChange()">
|
||||
<span>Automatically install updates</span>
|
||||
</label>
|
||||
<p class="setting-hint">When enabled, updates will be installed automatically on app restart.</p>
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Check Interval</label>
|
||||
<select class="form-select"
|
||||
[(ngModel)]="settings.checkInterval"
|
||||
(change)="onSettingsChange()">
|
||||
<option [value]="0">Disabled</option>
|
||||
<option [value]="1">Every hour</option>
|
||||
<option [value]="6">Every 6 hours</option>
|
||||
<option [value]="12">Every 12 hours</option>
|
||||
<option [value]="24">Daily</option>
|
||||
<option [value]="168">Weekly</option>
|
||||
</select>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="save-status" *ngIf="saveMessage">
|
||||
<span [class.error]="saveError">{{ saveMessage }}</span>
|
||||
</div>
|
||||
</div>
|
||||
`,
|
||||
styles: [`
|
||||
.updates-settings {
|
||||
padding: var(--spacing-md);
|
||||
}
|
||||
|
||||
.current-version {
|
||||
background: var(--bg-tertiary);
|
||||
border-radius: var(--radius-lg);
|
||||
padding: var(--spacing-lg);
|
||||
margin-bottom: var(--spacing-lg);
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.version-badge {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
gap: var(--spacing-sm);
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.version-number {
|
||||
font-size: 24px;
|
||||
font-weight: 600;
|
||||
}
|
||||
|
||||
.channel-badge {
|
||||
padding: 2px 8px;
|
||||
border-radius: 999px;
|
||||
font-size: 11px;
|
||||
font-weight: 600;
|
||||
text-transform: uppercase;
|
||||
}
|
||||
|
||||
.channel-stable { background: var(--accent-success); color: white; }
|
||||
.channel-beta { background: var(--accent-warning); color: black; }
|
||||
.channel-nightly { background: var(--accent-purple, #8b5cf6); color: white; }
|
||||
.channel-dev { background: var(--text-muted); color: var(--bg-primary); }
|
||||
|
||||
.build-info {
|
||||
color: var(--text-muted);
|
||||
font-size: 12px;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.update-check {
|
||||
margin-bottom: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.update-available, .up-to-date, .check-error {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: var(--spacing-md);
|
||||
padding: var(--spacing-md);
|
||||
border-radius: var(--radius-md);
|
||||
}
|
||||
|
||||
.update-available {
|
||||
background: var(--accent-warning-bg, rgba(245, 158, 11, 0.1));
|
||||
border: 1px solid var(--accent-warning);
|
||||
}
|
||||
|
||||
.up-to-date {
|
||||
background: var(--accent-success-bg, rgba(34, 197, 94, 0.1));
|
||||
border: 1px solid var(--accent-success);
|
||||
}
|
||||
|
||||
.check-error {
|
||||
background: var(--accent-danger-bg, rgba(239, 68, 68, 0.1));
|
||||
border: 1px solid var(--accent-danger);
|
||||
}
|
||||
|
||||
.update-icon, .check-icon, .error-icon {
|
||||
width: 40px;
|
||||
height: 40px;
|
||||
border-radius: 50%;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
font-weight: bold;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.update-icon { background: var(--accent-warning); color: black; }
|
||||
.check-icon { background: var(--accent-success); color: white; }
|
||||
.error-icon { background: var(--accent-danger); color: white; }
|
||||
|
||||
.update-info, .check-info, .error-info {
|
||||
flex: 1;
|
||||
}
|
||||
|
||||
.update-info h4, .check-info h4, .error-info h4 {
|
||||
margin: 0 0 var(--spacing-xs) 0;
|
||||
font-size: 14px;
|
||||
}
|
||||
|
||||
.update-info p, .check-info p, .error-info p {
|
||||
margin: 0;
|
||||
font-size: 13px;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.release-link {
|
||||
color: var(--accent-primary);
|
||||
font-size: 12px;
|
||||
}
|
||||
|
||||
.last-check {
|
||||
font-size: 11px;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.check-button-row {
|
||||
margin-bottom: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.settings-section {
|
||||
background: var(--bg-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-lg);
|
||||
padding: var(--spacing-lg);
|
||||
margin-bottom: var(--spacing-lg);
|
||||
}
|
||||
|
||||
.settings-section h3 {
|
||||
font-size: 14px;
|
||||
margin: 0 0 var(--spacing-xs) 0;
|
||||
}
|
||||
|
||||
.section-description {
|
||||
color: var(--text-muted);
|
||||
font-size: 12px;
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.channel-options {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.channel-option {
|
||||
display: flex;
|
||||
align-items: flex-start;
|
||||
gap: var(--spacing-sm);
|
||||
padding: var(--spacing-md);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-md);
|
||||
cursor: pointer;
|
||||
transition: all 0.15s ease;
|
||||
}
|
||||
|
||||
.channel-option:hover {
|
||||
border-color: var(--accent-primary);
|
||||
}
|
||||
|
||||
.channel-option.selected {
|
||||
border-color: var(--accent-primary);
|
||||
background: var(--accent-primary-bg, rgba(59, 130, 246, 0.1));
|
||||
}
|
||||
|
||||
.channel-option input[type="radio"] {
|
||||
margin-top: 2px;
|
||||
}
|
||||
|
||||
.channel-content {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: 2px;
|
||||
}
|
||||
|
||||
.channel-name {
|
||||
font-weight: 500;
|
||||
font-size: 14px;
|
||||
}
|
||||
|
||||
.channel-desc {
|
||||
font-size: 12px;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.form-group {
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.form-group:last-child {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
.checkbox-label {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: var(--spacing-sm);
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.setting-hint {
|
||||
color: var(--text-muted);
|
||||
font-size: 12px;
|
||||
margin: var(--spacing-xs) 0 0 24px;
|
||||
}
|
||||
|
||||
.form-label {
|
||||
display: block;
|
||||
font-size: 13px;
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.form-select {
|
||||
width: 100%;
|
||||
padding: var(--spacing-sm);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-md);
|
||||
background: var(--bg-secondary);
|
||||
color: var(--text-primary);
|
||||
font-size: 14px;
|
||||
}
|
||||
|
||||
.save-status {
|
||||
text-align: center;
|
||||
font-size: 13px;
|
||||
color: var(--accent-success);
|
||||
}
|
||||
|
||||
.save-status .error {
|
||||
color: var(--accent-danger);
|
||||
}
|
||||
|
||||
.btn {
|
||||
padding: var(--spacing-sm) var(--spacing-md);
|
||||
border: none;
|
||||
border-radius: var(--radius-md);
|
||||
font-size: 14px;
|
||||
cursor: pointer;
|
||||
transition: all 0.15s ease;
|
||||
}
|
||||
|
||||
.btn:disabled {
|
||||
opacity: 0.6;
|
||||
cursor: not-allowed;
|
||||
}
|
||||
|
||||
.btn--primary {
|
||||
background: var(--accent-primary);
|
||||
color: white;
|
||||
}
|
||||
|
||||
.btn--primary:hover:not(:disabled) {
|
||||
background: var(--accent-primary-hover, #2563eb);
|
||||
}
|
||||
|
||||
.btn--secondary {
|
||||
background: var(--bg-tertiary);
|
||||
color: var(--text-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.btn--secondary:hover:not(:disabled) {
|
||||
background: var(--bg-secondary);
|
||||
}
|
||||
`]
|
||||
})
|
||||
export class UpdatesComponent implements OnInit, OnDestroy {
|
||||
settings: UpdateSettings = {
|
||||
channel: 'stable',
|
||||
autoUpdate: false,
|
||||
checkInterval: 6,
|
||||
lastCheck: ''
|
||||
};
|
||||
|
||||
versionInfo: VersionInfo | null = null;
|
||||
checkResult: UpdateCheckResult | null = null;
|
||||
|
||||
channels: ChannelInfo[] = [
|
||||
{ id: 'stable', name: 'Stable', description: 'Production releases - most stable, recommended for most users' },
|
||||
{ id: 'beta', name: 'Beta', description: 'Pre-release builds - new features being tested before stable release' },
|
||||
{ id: 'nightly', name: 'Nightly', description: 'Latest development builds - bleeding edge, may be unstable' }
|
||||
];
|
||||
|
||||
isChecking = false;
|
||||
isInstalling = false;
|
||||
saveMessage = '';
|
||||
saveError = false;
|
||||
|
||||
private saveTimeout: ReturnType<typeof setTimeout> | null = null;
|
||||
|
||||
ngOnInit() {
|
||||
this.loadSettings();
|
||||
this.loadVersionInfo();
|
||||
}
|
||||
|
||||
ngOnDestroy() {
|
||||
if (this.saveTimeout) {
|
||||
clearTimeout(this.saveTimeout);
|
||||
}
|
||||
}
|
||||
|
||||
async loadSettings() {
|
||||
try {
|
||||
const wails = (window as any).go?.main;
|
||||
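// Try the dedicated UpdateService binding first; fall back to the equivalent ConfigService method if only that one is bound.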
if (wails?.UpdateService?.GetSettings) {
|
||||
this.settings = await wails.UpdateService.GetSettings();
|
||||
} else if (wails?.ConfigService?.GetUpdateSettings) {
|
||||
this.settings = await wails.ConfigService.GetUpdateSettings();
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to load update settings:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async loadVersionInfo() {
|
||||
try {
|
||||
const wails = (window as any).go?.main;
|
||||
if (wails?.VersionService?.GetVersionInfo) {
|
||||
this.versionInfo = await wails.VersionService.GetVersionInfo();
|
||||
} else if (wails?.UpdateService?.GetVersionInfo) {
|
||||
this.versionInfo = await wails.UpdateService.GetVersionInfo();
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to load version info:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async checkForUpdates() {
|
||||
this.isChecking = true;
|
||||
this.checkResult = null;
|
||||
|
||||
try {
|
||||
const wails = (window as any).go?.main;
|
||||
if (wails?.UpdateService?.CheckForUpdate) {
|
||||
this.checkResult = await wails.UpdateService.CheckForUpdate();
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to check for updates:', err);
|
||||
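// Build a synthetic error result so the template shows the failure state instead of staying blank.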
this.checkResult = {
|
||||
available: false,
|
||||
currentVersion: this.versionInfo?.version || 'unknown',
|
||||
latestVersion: '',
|
||||
error: 'Failed to check for updates',
|
||||
checkedAt: new Date().toISOString()
|
||||
};
|
||||
} finally {
|
||||
this.isChecking = false;
|
||||
}
|
||||
}
|
||||
|
||||
async installUpdate() {
|
||||
if (!this.checkResult?.available || !this.checkResult.release) {
|
||||
return;
|
||||
}
|
||||
|
||||
this.isInstalling = true;
|
||||
|
||||
try {
|
||||
const wails = (window as any).go?.main;
|
||||
if (wails?.UpdateService?.InstallUpdate) {
|
||||
await wails.UpdateService.InstallUpdate();
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to install update:', err);
|
||||
alert('Failed to install update. Please try again or download manually.');
|
||||
} finally {
|
||||
this.isInstalling = false;
|
||||
}
|
||||
}
|
||||
|
||||
async onSettingsChange() {
|
||||
// Debounce saves: wait 500ms after the last change before persisting.
|
||||
if (this.saveTimeout) {
|
||||
clearTimeout(this.saveTimeout);
|
||||
}
|
||||
|
||||
this.saveTimeout = setTimeout(() => this.saveSettings(), 500);
|
||||
}
|
||||
|
||||
async saveSettings() {
|
||||
try {
|
||||
const wails = (window as any).go?.main;
|
||||
if (wails?.UpdateService?.SetSettings) {
|
||||
await wails.UpdateService.SetSettings(this.settings);
|
||||
} else if (wails?.ConfigService?.SetUpdateSettings) {
|
||||
await wails.ConfigService.SetUpdateSettings(this.settings);
|
||||
}
|
||||
this.saveMessage = 'Settings saved';
|
||||
this.saveError = false;
|
||||
} catch (err) {
|
||||
console.error('Failed to save update settings:', err);
|
||||
this.saveMessage = 'Failed to save settings';
|
||||
this.saveError = true;
|
||||
}
|
||||
|
||||
// Clear message after 2 seconds
|
||||
setTimeout(() => {
|
||||
this.saveMessage = '';
|
||||
}, 2000);
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,303 +0,0 @@
|
|||
import { Component, OnInit, OnDestroy } from '@angular/core';
|
||||
import { CommonModule } from '@angular/common';
|
||||
|
||||
interface TrayStatus {
|
||||
running: boolean;
|
||||
currentIssue: string;
|
||||
queueSize: number;
|
||||
issuesFixed: number;
|
||||
prsMerged: number;
|
||||
}
|
||||
|
||||
@Component({
|
||||
selector: 'app-tray',
|
||||
standalone: true,
|
||||
imports: [CommonModule],
|
||||
template: `
|
||||
<div class="tray-panel">
|
||||
<header class="tray-header">
|
||||
<div class="logo">
|
||||
<span class="logo-icon">B</span>
|
||||
<span class="logo-text">BugSETI</span>
|
||||
</div>
|
||||
<span class="badge" [class.badge--success]="status.running" [class.badge--warning]="!status.running">
|
||||
{{ status.running ? 'Running' : 'Paused' }}
|
||||
</span>
|
||||
</header>
|
||||
|
||||
<section class="stats-grid">
|
||||
<div class="stat-card">
|
||||
<span class="stat-value">{{ status.queueSize }}</span>
|
||||
<span class="stat-label">In Queue</span>
|
||||
</div>
|
||||
<div class="stat-card">
|
||||
<span class="stat-value">{{ status.issuesFixed }}</span>
|
||||
<span class="stat-label">Fixed</span>
|
||||
</div>
|
||||
<div class="stat-card">
|
||||
<span class="stat-value">{{ status.prsMerged }}</span>
|
||||
<span class="stat-label">Merged</span>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="current-issue" *ngIf="status.currentIssue">
|
||||
<h3>Current Issue</h3>
|
||||
<div class="issue-card">
|
||||
<p class="issue-title">{{ status.currentIssue }}</p>
|
||||
<div class="issue-actions">
|
||||
<button class="btn btn--primary btn--sm" (click)="openWorkbench()">
|
||||
Open Workbench
|
||||
</button>
|
||||
<button class="btn btn--secondary btn--sm" (click)="skipIssue()">
|
||||
Skip
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="current-issue" *ngIf="!status.currentIssue">
|
||||
<div class="empty-state">
|
||||
<span class="empty-icon">[ ]</span>
|
||||
<p>No issue in progress</p>
|
||||
<button class="btn btn--primary btn--sm" (click)="nextIssue()" [disabled]="status.queueSize === 0">
|
||||
Get Next Issue
|
||||
</button>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<footer class="tray-footer">
|
||||
<button class="btn btn--secondary btn--sm" (click)="openJellyfin()">
|
||||
Jellyfin
|
||||
</button>
|
||||
<button class="btn btn--secondary btn--sm" (click)="toggleRunning()">
|
||||
{{ status.running ? 'Pause' : 'Start' }}
|
||||
</button>
|
||||
<button class="btn btn--secondary btn--sm" (click)="openSettings()">
|
||||
Settings
|
||||
</button>
|
||||
</footer>
|
||||
</div>
|
||||
`,
|
||||
styles: [`
|
||||
.tray-panel {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
height: 100%;
|
||||
padding: var(--spacing-md);
|
||||
background-color: var(--bg-primary);
|
||||
}
|
||||
|
||||
.tray-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.logo {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.logo-icon {
|
||||
width: 28px;
|
||||
height: 28px;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
background: linear-gradient(135deg, var(--accent-primary), var(--accent-success));
|
||||
border-radius: var(--radius-md);
|
||||
font-weight: bold;
|
||||
color: white;
|
||||
}
|
||||
|
||||
.logo-text {
|
||||
font-weight: 600;
|
||||
font-size: 16px;
|
||||
}
|
||||
|
||||
.stats-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(3, 1fr);
|
||||
gap: var(--spacing-sm);
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.stat-card {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
align-items: center;
|
||||
padding: var(--spacing-sm);
|
||||
background-color: var(--bg-secondary);
|
||||
border-radius: var(--radius-md);
|
||||
}
|
||||
|
||||
.stat-value {
|
||||
font-size: 24px;
|
||||
font-weight: bold;
|
||||
color: var(--accent-primary);
|
||||
}
|
||||
|
||||
.stat-label {
|
||||
font-size: 11px;
|
||||
color: var(--text-muted);
|
||||
text-transform: uppercase;
|
||||
}
|
||||
|
||||
.current-issue {
|
||||
flex: 1;
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.current-issue h3 {
|
||||
font-size: 12px;
|
||||
color: var(--text-muted);
|
||||
text-transform: uppercase;
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.issue-card {
|
||||
background-color: var(--bg-secondary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-md);
|
||||
padding: var(--spacing-md);
|
||||
}
|
||||
|
||||
.issue-title {
|
||||
font-size: 13px;
|
||||
line-height: 1.4;
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.issue-actions {
|
||||
display: flex;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.empty-state {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
padding: var(--spacing-xl);
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.empty-icon {
|
||||
font-size: 32px;
|
||||
color: var(--text-muted);
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.empty-state p {
|
||||
color: var(--text-muted);
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.tray-footer {
|
||||
display: flex;
|
||||
gap: var(--spacing-sm);
|
||||
justify-content: center;
|
||||
}
|
||||
|
||||
.btn--sm {
|
||||
padding: var(--spacing-xs) var(--spacing-sm);
|
||||
font-size: 12px;
|
||||
}
|
||||
`]
|
||||
})
|
||||
export class TrayComponent implements OnInit, OnDestroy {
|
||||
status: TrayStatus = {
|
||||
running: false,
|
||||
currentIssue: '',
|
||||
queueSize: 0,
|
||||
issuesFixed: 0,
|
||||
prsMerged: 0
|
||||
};
|
||||
|
||||
private refreshInterval?: ReturnType<typeof setInterval>;
|
||||
|
||||
ngOnInit() {
|
||||
this.loadStatus();
|
||||
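// Poll the tray status every 5 seconds; the interval is cleared in ngOnDestroy.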
this.refreshInterval = setInterval(() => this.loadStatus(), 5000);
|
||||
}
|
||||
|
||||
ngOnDestroy() {
|
||||
if (this.refreshInterval) {
|
||||
clearInterval(this.refreshInterval);
|
||||
}
|
||||
}
|
||||
|
||||
async loadStatus() {
|
||||
try {
|
||||
// Call Wails binding when available
|
||||
if ((window as any).go?.main?.TrayService?.GetStatus) {
|
||||
this.status = await (window as any).go.main.TrayService.GetStatus();
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to load status:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async toggleRunning() {
|
||||
try {
|
||||
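// Pause fetching when currently running, start it when paused, then refresh the displayed status.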
if (this.status.running) {
|
||||
if ((window as any).go?.main?.TrayService?.PauseFetching) {
|
||||
await (window as any).go.main.TrayService.PauseFetching();
|
||||
}
|
||||
} else {
|
||||
if ((window as any).go?.main?.TrayService?.StartFetching) {
|
||||
await (window as any).go.main.TrayService.StartFetching();
|
||||
}
|
||||
}
|
||||
this.loadStatus();
|
||||
} catch (err) {
|
||||
console.error('Failed to toggle running:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async nextIssue() {
|
||||
try {
|
||||
if ((window as any).go?.main?.TrayService?.NextIssue) {
|
||||
await (window as any).go.main.TrayService.NextIssue();
|
||||
}
|
||||
this.loadStatus();
|
||||
} catch (err) {
|
||||
console.error('Failed to get next issue:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async skipIssue() {
|
||||
try {
|
||||
if ((window as any).go?.main?.TrayService?.SkipIssue) {
|
||||
await (window as any).go.main.TrayService.SkipIssue();
|
||||
}
|
||||
this.loadStatus();
|
||||
} catch (err) {
|
||||
console.error('Failed to skip issue:', err);
|
||||
}
|
||||
}
|
||||
|
||||
openWorkbench() {
|
||||
if ((window as any).wails?.Window) {
|
||||
(window as any).wails.Window.GetByName('workbench').then((w: any) => {
|
||||
w.Show();
|
||||
w.Focus();
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
openSettings() {
|
||||
if ((window as any).wails?.Window) {
|
||||
(window as any).wails.Window.GetByName('settings').then((w: any) => {
|
||||
w.Show();
|
||||
w.Focus();
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
openJellyfin() {
|
||||
window.location.assign('/jellyfin');
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,356 +0,0 @@
|
|||
import { Component, OnInit } from '@angular/core';
|
||||
import { CommonModule } from '@angular/common';
|
||||
import { FormsModule } from '@angular/forms';
|
||||
|
||||
interface Issue {
|
||||
id: string;
|
||||
number: number;
|
||||
repo: string;
|
||||
title: string;
|
||||
body: string;
|
||||
url: string;
|
||||
labels: string[];
|
||||
author: string;
|
||||
context?: IssueContext;
|
||||
}
|
||||
|
||||
interface IssueContext {
|
||||
summary: string;
|
||||
relevantFiles: string[];
|
||||
suggestedFix: string;
|
||||
complexity: string;
|
||||
estimatedTime: string;
|
||||
}
|
||||
|
||||
@Component({
|
||||
selector: 'app-workbench',
|
||||
standalone: true,
|
||||
imports: [CommonModule, FormsModule],
|
||||
template: `
|
||||
<div class="workbench">
|
||||
<header class="workbench-header">
|
||||
<h1>BugSETI Workbench</h1>
|
||||
<div class="header-actions">
|
||||
<button class="btn btn--secondary" (click)="skipIssue()">Skip</button>
|
||||
<button class="btn btn--success" (click)="submitPR()" [disabled]="!canSubmit">Submit PR</button>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<div class="workbench-content" *ngIf="currentIssue">
|
||||
<aside class="issue-panel">
|
||||
<div class="card">
|
||||
<div class="card__header">
|
||||
<h2 class="card__title">Issue #{{ currentIssue.number }}</h2>
|
||||
<a [href]="currentIssue.url" target="_blank" class="btn btn--secondary btn--sm">View on GitHub</a>
|
||||
</div>
|
||||
|
||||
<h3>{{ currentIssue.title }}</h3>
|
||||
|
||||
<div class="labels">
|
||||
<span class="badge badge--primary" *ngFor="let label of currentIssue.labels">
|
||||
{{ label }}
|
||||
</span>
|
||||
</div>
|
||||
|
||||
<div class="issue-meta">
|
||||
<span>{{ currentIssue.repo }}</span>
|
||||
<span>by {{ currentIssue.author }}</span>
|
||||
</div>
|
||||
|
||||
<div class="issue-body">
|
||||
<pre>{{ currentIssue.body }}</pre>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="card" *ngIf="currentIssue.context">
|
||||
<div class="card__header">
|
||||
<h2 class="card__title">AI Context</h2>
|
||||
<span class="badge" [ngClass]="{
|
||||
'badge--success': currentIssue.context.complexity === 'easy',
|
||||
'badge--warning': currentIssue.context.complexity === 'medium',
|
||||
'badge--danger': currentIssue.context.complexity === 'hard'
|
||||
}">
|
||||
{{ currentIssue.context.complexity }}
|
||||
</span>
|
||||
</div>
|
||||
|
||||
<p class="context-summary">{{ currentIssue.context.summary }}</p>
|
||||
|
||||
<div class="context-section" *ngIf="currentIssue.context.relevantFiles?.length">
|
||||
<h4>Relevant Files</h4>
|
||||
<ul class="file-list">
|
||||
<li *ngFor="let file of currentIssue.context.relevantFiles">
|
||||
<code>{{ file }}</code>
|
||||
</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="context-section" *ngIf="currentIssue.context.suggestedFix">
|
||||
<h4>Suggested Approach</h4>
|
||||
<p>{{ currentIssue.context.suggestedFix }}</p>
|
||||
</div>
|
||||
|
||||
<div class="context-meta">
|
||||
<span>Est. time: {{ currentIssue.context.estimatedTime || 'Unknown' }}</span>
|
||||
</div>
|
||||
</div>
|
||||
</aside>
|
||||
|
||||
<main class="editor-panel">
|
||||
<div class="card">
|
||||
<div class="card__header">
|
||||
<h2 class="card__title">PR Details</h2>
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">PR Title</label>
|
||||
<input type="text" class="form-input" [(ngModel)]="prTitle"
|
||||
[placeholder]="'Fix #' + currentIssue.number + ': ' + currentIssue.title">
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">PR Description</label>
|
||||
<textarea class="form-textarea" [(ngModel)]="prBody" rows="8"
|
||||
placeholder="Describe your changes..."></textarea>
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Branch Name</label>
|
||||
<input type="text" class="form-input" [(ngModel)]="branchName"
|
||||
[placeholder]="'bugseti/issue-' + currentIssue.number">
|
||||
</div>
|
||||
|
||||
<div class="form-group">
|
||||
<label class="form-label">Commit Message</label>
|
||||
<textarea class="form-textarea" [(ngModel)]="commitMessage" rows="3"
|
||||
[placeholder]="'fix: resolve issue #' + currentIssue.number"></textarea>
|
||||
</div>
|
||||
</div>
|
||||
</main>
|
||||
</div>
|
||||
|
||||
<div class="empty-state" *ngIf="!currentIssue">
|
||||
<h2>No Issue Selected</h2>
|
||||
<p>Get an issue from the queue to start working.</p>
|
||||
<button class="btn btn--primary" (click)="nextIssue()">Get Next Issue</button>
|
||||
</div>
|
||||
</div>
|
||||
`,
|
||||
styles: [`
|
||||
.workbench {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
height: 100%;
|
||||
background-color: var(--bg-secondary);
|
||||
}
|
||||
|
||||
.workbench-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
padding: var(--spacing-md) var(--spacing-lg);
|
||||
background-color: var(--bg-primary);
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.workbench-header h1 {
|
||||
font-size: 18px;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.header-actions {
|
||||
display: flex;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.workbench-content {
|
||||
display: grid;
|
||||
grid-template-columns: 400px 1fr;
|
||||
flex: 1;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.issue-panel {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--spacing-md);
|
||||
padding: var(--spacing-md);
|
||||
overflow-y: auto;
|
||||
border-right: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.editor-panel {
|
||||
padding: var(--spacing-md);
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
.labels {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
gap: var(--spacing-xs);
|
||||
margin: var(--spacing-sm) 0;
|
||||
}
|
||||
|
||||
.issue-meta {
|
||||
display: flex;
|
||||
gap: var(--spacing-md);
|
||||
font-size: 12px;
|
||||
color: var(--text-muted);
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.issue-body {
|
||||
padding: var(--spacing-md);
|
||||
background-color: var(--bg-tertiary);
|
||||
border-radius: var(--radius-md);
|
||||
max-height: 200px;
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
.issue-body pre {
|
||||
white-space: pre-wrap;
|
||||
word-wrap: break-word;
|
||||
font-size: 13px;
|
||||
line-height: 1.5;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.context-summary {
|
||||
color: var(--text-secondary);
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.context-section {
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.context-section h4 {
|
||||
font-size: 12px;
|
||||
text-transform: uppercase;
|
||||
color: var(--text-muted);
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.file-list {
|
||||
list-style: none;
|
||||
padding: 0;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.file-list li {
|
||||
padding: var(--spacing-xs) 0;
|
||||
}
|
||||
|
||||
.context-meta {
|
||||
font-size: 12px;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.empty-state {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
flex: 1;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.empty-state h2 {
|
||||
color: var(--text-secondary);
|
||||
}
|
||||
|
||||
.empty-state p {
|
||||
color: var(--text-muted);
|
||||
}
|
||||
`]
|
||||
})
|
||||
export class WorkbenchComponent implements OnInit {
|
||||
currentIssue: Issue | null = null;
|
||||
prTitle = '';
|
||||
prBody = '';
|
||||
branchName = '';
|
||||
commitMessage = '';
|
||||
|
||||
get canSubmit(): boolean {
|
||||
return !!this.currentIssue && !!this.prTitle;
|
||||
}
|
||||
|
||||
ngOnInit() {
|
||||
this.loadCurrentIssue();
|
||||
}
|
||||
|
||||
async loadCurrentIssue() {
|
||||
try {
|
||||
if ((window as any).go?.main?.TrayService?.GetCurrentIssue) {
|
||||
this.currentIssue = await (window as any).go.main.TrayService.GetCurrentIssue();
|
||||
if (this.currentIssue) {
|
||||
this.initDefaults();
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to load current issue:', err);
|
||||
}
|
||||
}
|
||||
|
||||
initDefaults() {
|
||||
if (!this.currentIssue) return;
|
||||
|
||||
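// Pre-fill the PR title, branch name and commit message from the issue so they can be submitted with minimal editing.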
this.prTitle = `Fix #${this.currentIssue.number}: ${this.currentIssue.title}`;
|
||||
this.branchName = `bugseti/issue-${this.currentIssue.number}`;
|
||||
this.commitMessage = `fix: resolve issue #${this.currentIssue.number}\n\n${this.currentIssue.title}`;
|
||||
}
|
||||
|
||||
async nextIssue() {
|
||||
try {
|
||||
if ((window as any).go?.main?.TrayService?.NextIssue) {
|
||||
this.currentIssue = await (window as any).go.main.TrayService.NextIssue();
|
||||
if (this.currentIssue) {
|
||||
this.initDefaults();
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to get next issue:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async skipIssue() {
|
||||
try {
|
||||
if ((window as any).go?.main?.TrayService?.SkipIssue) {
|
||||
await (window as any).go.main.TrayService.SkipIssue();
|
||||
this.currentIssue = null;
|
||||
this.prTitle = '';
|
||||
this.prBody = '';
|
||||
this.branchName = '';
|
||||
this.commitMessage = '';
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to skip issue:', err);
|
||||
}
|
||||
}
|
||||
|
||||
async submitPR() {
|
||||
if (!this.currentIssue || !this.canSubmit) return;
|
||||
|
||||
try {
|
||||
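// Hand the issue and the edited PR metadata to the SubmitService binding; the Go side is expected to create the branch, commit and pull request.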
if ((window as any).go?.main?.SubmitService?.Submit) {
|
||||
const result = await (window as any).go.main.SubmitService.Submit({
|
||||
issue: this.currentIssue,
|
||||
title: this.prTitle,
|
||||
body: this.prBody,
|
||||
branch: this.branchName,
|
||||
commitMsg: this.commitMessage
|
||||
});
|
||||
|
||||
if (result.success) {
|
||||
alert(`PR submitted successfully!\n\n${result.prUrl}`);
|
||||
this.currentIssue = null;
|
||||
} else {
|
||||
alert(`Failed to submit PR: ${result.error}`);
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Failed to submit PR:', err);
|
||||
alert('Failed to submit PR. Check console for details.');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,13 +0,0 @@
|
|||
<!doctype html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<title>BugSETI</title>
|
||||
<base href="/">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<link rel="icon" type="image/x-icon" href="favicon.ico">
|
||||
</head>
|
||||
<body>
|
||||
<app-root></app-root>
|
||||
</body>
|
||||
</html>
|
||||
|
|
@@ -1,6 +0,0 @@
|
|||
import { bootstrapApplication } from '@angular/platform-browser';
|
||||
import { appConfig } from './app/app.config';
|
||||
import { AppComponent } from './app/app.component';
|
||||
|
||||
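// Standalone bootstrap: no NgModule; application providers come from appConfig.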
bootstrapApplication(AppComponent, appConfig)
|
||||
.catch((err) => console.error(err));
|
||||
|
|
@@ -1,268 +0,0 @@
|
|||
// BugSETI Global Styles
|
||||
|
||||
// CSS Variables for theming
|
||||
:root {
|
||||
// Dark theme (default)
|
||||
--bg-primary: #161b22;
|
||||
--bg-secondary: #0d1117;
|
||||
--bg-tertiary: #21262d;
|
||||
--text-primary: #c9d1d9;
|
||||
--text-secondary: #8b949e;
|
||||
--text-muted: #6e7681;
|
||||
--border-color: #30363d;
|
||||
--accent-primary: #58a6ff;
|
||||
--accent-success: #3fb950;
|
||||
--accent-warning: #d29922;
|
||||
--accent-danger: #f85149;
|
||||
|
||||
// Spacing
|
||||
--spacing-xs: 4px;
|
||||
--spacing-sm: 8px;
|
||||
--spacing-md: 16px;
|
||||
--spacing-lg: 24px;
|
||||
--spacing-xl: 32px;
|
||||
|
||||
// Border radius
|
||||
--radius-sm: 4px;
|
||||
--radius-md: 6px;
|
||||
--radius-lg: 12px;
|
||||
|
||||
// Font
|
||||
--font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Noto Sans', Helvetica, Arial, sans-serif;
|
||||
--font-mono: ui-monospace, SFMono-Regular, SF Mono, Menlo, Consolas, Liberation Mono, monospace;
|
||||
}
|
||||
|
||||
// Light theme
|
||||
[data-theme="light"] {
|
||||
--bg-primary: #ffffff;
|
||||
--bg-secondary: #f6f8fa;
|
||||
--bg-tertiary: #f0f3f6;
|
||||
--text-primary: #24292f;
|
||||
--text-secondary: #57606a;
|
||||
--text-muted: #8b949e;
|
||||
--border-color: #d0d7de;
|
||||
--accent-primary: #0969da;
|
||||
--accent-success: #1a7f37;
|
||||
--accent-warning: #9a6700;
|
||||
--accent-danger: #cf222e;
|
||||
}
|
||||
|
||||
// Reset
|
||||
*,
|
||||
*::before,
|
||||
*::after {
|
||||
box-sizing: border-box;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
html, body {
|
||||
height: 100%;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: var(--font-family);
|
||||
font-size: 14px;
|
||||
line-height: 1.5;
|
||||
color: var(--text-primary);
|
||||
background-color: var(--bg-primary);
|
||||
-webkit-font-smoothing: antialiased;
|
||||
-moz-osx-font-smoothing: grayscale;
|
||||
}
|
||||
|
||||
// Typography
|
||||
h1, h2, h3, h4, h5, h6 {
|
||||
font-weight: 600;
|
||||
line-height: 1.25;
|
||||
margin-bottom: var(--spacing-sm);
|
||||
}
|
||||
|
||||
h1 { font-size: 24px; }
|
||||
h2 { font-size: 20px; }
|
||||
h3 { font-size: 16px; }
|
||||
h4 { font-size: 14px; }
|
||||
|
||||
p {
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
a {
|
||||
color: var(--accent-primary);
|
||||
text-decoration: none;
|
||||
|
||||
&:hover {
|
||||
text-decoration: underline;
|
||||
}
|
||||
}
|
||||
|
||||
code {
|
||||
font-family: var(--font-mono);
|
||||
font-size: 12px;
|
||||
padding: 2px 6px;
|
||||
background-color: var(--bg-tertiary);
|
||||
border-radius: var(--radius-sm);
|
||||
}
|
||||
|
||||
// Buttons
|
||||
.btn {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
gap: var(--spacing-xs);
|
||||
padding: var(--spacing-sm) var(--spacing-md);
|
||||
font-size: 14px;
|
||||
font-weight: 500;
|
||||
line-height: 1;
|
||||
border: 1px solid transparent;
|
||||
border-radius: var(--radius-md);
|
||||
cursor: pointer;
|
||||
transition: all 0.2s;
|
||||
|
||||
&:disabled {
|
||||
opacity: 0.5;
|
||||
cursor: not-allowed;
|
||||
}
|
||||
|
||||
&--primary {
|
||||
background-color: var(--accent-primary);
|
||||
color: white;
|
||||
|
||||
&:hover:not(:disabled) {
|
||||
opacity: 0.9;
|
||||
}
|
||||
}
|
||||
|
||||
&--secondary {
|
||||
background-color: var(--bg-tertiary);
|
||||
border-color: var(--border-color);
|
||||
color: var(--text-primary);
|
||||
|
||||
&:hover:not(:disabled) {
|
||||
background-color: var(--bg-secondary);
|
||||
}
|
||||
}
|
||||
|
||||
&--success {
|
||||
background-color: var(--accent-success);
|
||||
color: white;
|
||||
}
|
||||
|
||||
&--danger {
|
||||
background-color: var(--accent-danger);
|
||||
color: white;
|
||||
}
|
||||
}
|
||||
|
||||
// Forms
|
||||
.form-group {
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.form-label {
|
||||
display: block;
|
||||
margin-bottom: var(--spacing-xs);
|
||||
font-weight: 500;
|
||||
color: var(--text-primary);
|
||||
}
|
||||
|
||||
.form-input,
|
||||
.form-select,
|
||||
.form-textarea {
|
||||
width: 100%;
|
||||
padding: var(--spacing-sm) var(--spacing-md);
|
||||
font-size: 14px;
|
||||
background-color: var(--bg-secondary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-md);
|
||||
color: var(--text-primary);
|
||||
|
||||
&:focus {
|
||||
outline: none;
|
||||
border-color: var(--accent-primary);
|
||||
box-shadow: 0 0 0 3px rgba(88, 166, 255, 0.2);
|
||||
}
|
||||
|
||||
&::placeholder {
|
||||
color: var(--text-muted);
|
||||
}
|
||||
}
|
||||
|
||||
.form-textarea {
|
||||
resize: vertical;
|
||||
min-height: 100px;
|
||||
}
|
||||
|
||||
// Cards
|
||||
.card {
|
||||
background-color: var(--bg-secondary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--radius-lg);
|
||||
padding: var(--spacing-md);
|
||||
|
||||
&__header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
margin-bottom: var(--spacing-md);
|
||||
padding-bottom: var(--spacing-sm);
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
&__title {
|
||||
font-size: 16px;
|
||||
font-weight: 600;
|
||||
}
|
||||
}
|
||||
|
||||
// Badges
|
||||
.badge {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
padding: 2px 8px;
|
||||
font-size: 12px;
|
||||
font-weight: 500;
|
||||
border-radius: 999px;
|
||||
|
||||
&--primary {
|
||||
background-color: rgba(88, 166, 255, 0.15);
|
||||
color: var(--accent-primary);
|
||||
}
|
||||
|
||||
&--success {
|
||||
background-color: rgba(63, 185, 80, 0.15);
|
||||
color: var(--accent-success);
|
||||
}
|
||||
|
||||
&--warning {
|
||||
background-color: rgba(210, 153, 34, 0.15);
|
||||
color: var(--accent-warning);
|
||||
}
|
||||
|
||||
&--danger {
|
||||
background-color: rgba(248, 81, 73, 0.15);
|
||||
color: var(--accent-danger);
|
||||
}
|
||||
}
|
||||
|
||||
// Utility classes
|
||||
.text-center { text-align: center; }
|
||||
.text-right { text-align: right; }
|
||||
.text-muted { color: var(--text-muted); }
|
||||
.text-success { color: var(--accent-success); }
|
||||
.text-danger { color: var(--accent-danger); }
|
||||
.text-warning { color: var(--accent-warning); }
|
||||
|
||||
.flex { display: flex; }
|
||||
.flex-col { flex-direction: column; }
|
||||
.items-center { align-items: center; }
|
||||
.justify-between { justify-content: space-between; }
|
||||
.gap-sm { gap: var(--spacing-sm); }
|
||||
.gap-md { gap: var(--spacing-md); }
|
||||
|
||||
.mt-sm { margin-top: var(--spacing-sm); }
|
||||
.mt-md { margin-top: var(--spacing-md); }
|
||||
.mb-sm { margin-bottom: var(--spacing-sm); }
|
||||
.mb-md { margin-bottom: var(--spacing-md); }
|
||||
|
||||
.hidden { display: none; }
|
||||
|
|
@@ -1,13 +0,0 @@
|
|||
{
|
||||
"extends": "./tsconfig.json",
|
||||
"compilerOptions": {
|
||||
"outDir": "./out-tsc/app",
|
||||
"types": []
|
||||
},
|
||||
"files": [
|
||||
"src/main.ts"
|
||||
],
|
||||
"include": [
|
||||
"src/**/*.d.ts"
|
||||
]
|
||||
}
|
||||
|
|
@@ -1,35 +0,0 @@
|
|||
{
|
||||
"compileOnSave": false,
|
||||
"compilerOptions": {
|
||||
"baseUrl": "./",
|
||||
"outDir": "./dist/out-tsc",
|
||||
"forceConsistentCasingInFileNames": true,
|
||||
"strict": true,
|
||||
"noImplicitOverride": true,
|
||||
"noPropertyAccessFromIndexSignature": true,
|
||||
"noImplicitReturns": true,
|
||||
"noFallthroughCasesInSwitch": true,
|
||||
"esModuleInterop": true,
|
||||
"sourceMap": true,
|
||||
"declaration": false,
|
||||
"experimentalDecorators": true,
|
||||
"moduleResolution": "bundler",
|
||||
"importHelpers": true,
|
||||
"target": "ES2022",
|
||||
"module": "ES2022",
|
||||
"lib": [
|
||||
"ES2022",
|
||||
"dom"
|
||||
],
|
||||
"paths": {
|
||||
"@app/*": ["src/app/*"],
|
||||
"@shared/*": ["src/app/shared/*"]
|
||||
}
|
||||
},
|
||||
"angularCompilerOptions": {
|
||||
"enableI18nLegacyMessageIdFormat": false,
|
||||
"strictInjectionParameters": true,
|
||||
"strictInputAccessModifiers": true,
|
||||
"strictTemplates": true
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,13 +0,0 @@
|
|||
{
|
||||
"extends": "./tsconfig.json",
|
||||
"compilerOptions": {
|
||||
"outDir": "./out-tsc/spec",
|
||||
"types": [
|
||||
"jasmine"
|
||||
]
|
||||
},
|
||||
"include": [
|
||||
"src/**/*.spec.ts",
|
||||
"src/**/*.d.ts"
|
||||
]
|
||||
}
|
||||
|
|
@@ -1,88 +0,0 @@
|
|||
module forge.lthn.ai/core/go/cmd/bugseti
|
||||
|
||||
go 1.25.5
|
||||
|
||||
require (
|
||||
forge.lthn.ai/core/go v0.0.0
|
||||
forge.lthn.ai/core/go/internal/bugseti v0.0.0
|
||||
forge.lthn.ai/core/go/internal/bugseti/updater v0.0.0
|
||||
github.com/Snider/Borg v0.2.0
|
||||
github.com/wailsapp/wails/v3 v3.0.0-alpha.64
|
||||
)
|
||||
|
||||
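// Point the parent module and the bugseti packages at their in-repo copies while developing inside the monorepo.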
replace forge.lthn.ai/core/go => ../..
|
||||
|
||||
replace forge.lthn.ai/core/go/internal/bugseti => ../../internal/bugseti
|
||||
|
||||
replace forge.lthn.ai/core/go/internal/bugseti/updater => ../../internal/bugseti/updater
|
||||
|
||||
require (
|
||||
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0 // indirect
|
||||
dario.cat/mergo v1.0.2 // indirect
|
||||
github.com/42wim/httpsig v1.2.3 // indirect
|
||||
github.com/Microsoft/go-winio v0.6.2 // indirect
|
||||
github.com/ProtonMail/go-crypto v1.3.0 // indirect
|
||||
github.com/Snider/Enchantrix v0.0.2 // indirect
|
||||
github.com/adrg/xdg v0.5.3 // indirect
|
||||
github.com/bahlo/generic-list-go v0.2.0 // indirect
|
||||
github.com/bep/debounce v1.2.1 // indirect
|
||||
github.com/buger/jsonparser v1.1.1 // indirect
|
||||
github.com/cloudflare/circl v1.6.3 // indirect
|
||||
github.com/coder/websocket v1.8.14 // indirect
|
||||
github.com/cyphar/filepath-securejoin v0.6.1 // indirect
|
||||
github.com/davidmz/go-pageant v1.0.2 // indirect
|
||||
github.com/ebitengine/purego v0.9.1 // indirect
|
||||
github.com/emirpasic/gods v1.18.1 // indirect
|
||||
github.com/fsnotify/fsnotify v1.9.0 // indirect
|
||||
github.com/go-fed/httpsig v1.1.0 // indirect
|
||||
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
|
||||
github.com/go-git/go-billy/v5 v5.7.0 // indirect
|
||||
github.com/go-git/go-git/v5 v5.16.4 // indirect
|
||||
github.com/go-ole/go-ole v1.3.0 // indirect
|
||||
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
|
||||
github.com/godbus/dbus/v5 v5.2.2 // indirect
|
||||
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
|
||||
github.com/google/uuid v1.6.0 // indirect
|
||||
github.com/hashicorp/go-version v1.7.0 // indirect
|
||||
github.com/invopop/jsonschema v0.13.0 // indirect
|
||||
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
|
||||
github.com/jchv/go-winloader v0.0.0-20250406163304-c1995be93bd1 // indirect
|
||||
github.com/kevinburke/ssh_config v1.4.0 // indirect
|
||||
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
|
||||
github.com/leaanthony/go-ansi-parser v1.6.1 // indirect
|
||||
github.com/leaanthony/u v1.1.1 // indirect
|
||||
github.com/lmittmann/tint v1.1.2 // indirect
|
||||
github.com/mailru/easyjson v0.9.1 // indirect
|
||||
github.com/mark3labs/mcp-go v0.43.2 // indirect
|
||||
github.com/mattn/go-colorable v0.1.14 // indirect
|
||||
github.com/mattn/go-isatty v0.0.20 // indirect
|
||||
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
|
||||
github.com/pjbgf/sha1cd v0.5.0 // indirect
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
|
||||
github.com/rivo/uniseg v0.4.7 // indirect
|
||||
github.com/sagikazarmark/locafero v0.11.0 // indirect
|
||||
github.com/samber/lo v1.52.0 // indirect
|
||||
github.com/sergi/go-diff v1.4.0 // indirect
|
||||
github.com/skeema/knownhosts v1.3.2 // indirect
|
||||
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
|
||||
github.com/spf13/afero v1.15.0 // indirect
|
||||
github.com/spf13/cast v1.10.0 // indirect
|
||||
github.com/spf13/pflag v1.0.10 // indirect
|
||||
github.com/spf13/viper v1.21.0 // indirect
|
||||
github.com/subosito/gotenv v1.6.0 // indirect
|
||||
github.com/wailsapp/go-webview2 v1.0.23 // indirect
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
|
||||
github.com/xanzy/ssh-agent v0.3.3 // indirect
|
||||
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
|
||||
go.yaml.in/yaml/v3 v3.0.4 // indirect
|
||||
golang.org/x/crypto v0.47.0 // indirect
|
||||
golang.org/x/mod v0.32.0 // indirect
|
||||
golang.org/x/net v0.49.0 // indirect
|
||||
golang.org/x/sys v0.40.0 // indirect
|
||||
golang.org/x/text v0.33.0 // indirect
|
||||
gopkg.in/warnings.v0 v0.1.2 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
)
|
||||
|
|
@@ -1,181 +0,0 @@
|
|||
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0 h1:HTCWpzyWQOHDWt3LzI6/d2jvUDsw/vgGRWm/8BTvcqI=
|
||||
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
|
||||
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
|
||||
github.com/42wim/httpsig v1.2.3 h1:xb0YyWhkYj57SPtfSttIobJUPJZB9as1nsfo7KWVcEs=
|
||||
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
|
||||
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
|
||||
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
|
||||
github.com/ProtonMail/go-crypto v1.3.0 h1:ILq8+Sf5If5DCpHQp4PbZdS1J7HDFRXz/+xKBiRGFrw=
|
||||
github.com/ProtonMail/go-crypto v1.3.0/go.mod h1:9whxjD8Rbs29b4XWbB8irEcE8KHMqaR2e7GWU1R+/PE=
|
||||
github.com/Snider/Borg v0.2.0 h1:iCyDhY4WTXi39+FexRwXbn2YpZ2U9FUXVXDZk9xRCXQ=
|
||||
github.com/Snider/Borg v0.2.0/go.mod h1:TqlKnfRo9okioHbgrZPfWjQsztBV0Nfskz4Om1/vdMY=
|
||||
github.com/Snider/Enchantrix v0.0.2 h1:ExZQiBhfS/p/AHFTKhY80TOd+BXZjK95EzByAEgwvjs=
|
||||
github.com/Snider/Enchantrix v0.0.2/go.mod h1:CtFcLAvnDT1KcuF1JBb/DJj0KplY8jHryO06KzQ1hsQ=
|
||||
github.com/adrg/xdg v0.5.3 h1:xRnxJXne7+oWDatRhR1JLnvuccuIeCoBu2rtuLqQB78=
|
||||
github.com/adrg/xdg v0.5.3/go.mod h1:nlTsY+NNiCBGCK2tpm09vRqfVzrc2fLmXGpBLF0zlTQ=
|
||||
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
|
||||
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
|
||||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
|
||||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
|
||||
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
|
||||
github.com/bep/debounce v1.2.1 h1:v67fRdBA9UQu2NhLFXrSg0Brw7CexQekrBwDMM8bzeY=
|
||||
github.com/bep/debounce v1.2.1/go.mod h1:H8yggRPQKLUhUoqrJC1bO2xNya7vanpDl7xR3ISbCJ0=
|
||||
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
|
||||
github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
|
||||
github.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=
|
||||
github.com/coder/websocket v1.8.14 h1:9L0p0iKiNOibykf283eHkKUHHrpG7f65OE3BhhO7v9g=
|
||||
github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
|
||||
github.com/cyphar/filepath-securejoin v0.6.1 h1:5CeZ1jPXEiYt3+Z6zqprSAgSWiggmpVyciv8syjIpVE=
|
||||
github.com/cyphar/filepath-securejoin v0.6.1/go.mod h1:A8hd4EnAeyujCJRrICiOWqjS1AX0a9kM5XL+NwKoYSc=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davidmz/go-pageant v1.0.2 h1:bPblRCh5jGU+Uptpz6LgMZGD5hJoOt7otgT454WvHn0=
|
||||
github.com/ebitengine/purego v0.9.1 h1:a/k2f2HQU3Pi399RPW1MOaZyhKJL9w/xFpKAg4q1s0A=
|
||||
github.com/ebitengine/purego v0.9.1/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
|
||||
github.com/elazarl/goproxy v1.7.2 h1:Y2o6urb7Eule09PjlhQRGNsqRfPmYI3KKQLFpCAV3+o=
|
||||
github.com/elazarl/goproxy v1.7.2/go.mod h1:82vkLNir0ALaW14Rc399OTTjyNREgmdL2cVoIbS6XaE=
|
||||
github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
|
||||
github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ=
|
||||
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
|
||||
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
|
||||
github.com/gliderlabs/ssh v0.3.8 h1:a4YXD1V7xMF9g5nTkdfnja3Sxy1PVDCj1Zg4Wb8vY6c=
|
||||
github.com/gliderlabs/ssh v0.3.8/go.mod h1:xYoytBv1sV0aL3CavoDuJIQNURXkkfPA/wxQ1pL1fAU=
|
||||
github.com/go-fed/httpsig v1.1.0 h1:9M+hb0jkEICD8/cAiNqEB66R87tTINszBRTjwjQzWcI=
|
||||
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
|
||||
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
|
||||
github.com/go-git/go-billy/v5 v5.7.0 h1:83lBUJhGWhYp0ngzCMSgllhUSuoHP1iEWYjsPl9nwqM=
|
||||
github.com/go-git/go-billy/v5 v5.7.0/go.mod h1:/1IUejTKH8xipsAcdfcSAlUlo2J7lkYV8GTKxAT/L3E=
|
||||
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
|
||||
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399/go.mod h1:1OCfN199q1Jm3HZlxleg+Dw/mwps2Wbk9frAWm+4FII=
|
||||
github.com/go-git/go-git/v5 v5.16.4 h1:7ajIEZHZJULcyJebDLo99bGgS0jRrOxzZG4uCk2Yb2Y=
|
||||
github.com/go-git/go-git/v5 v5.16.4/go.mod h1:4Ge4alE/5gPs30F2H1esi2gPd69R0C39lolkucHBOp8=
|
||||
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e h1:Lf/gRkoycfOBPa42vU2bbgPurFong6zXeFtPoxholzU=
|
||||
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e/go.mod h1:uNVvRXArCGbZ508SxYYTC5v1JWoz2voff5pm25jU1Ok=
|
||||
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
|
||||
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
|
||||
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
|
||||
github.com/godbus/dbus/v5 v5.2.2 h1:TUR3TgtSVDmjiXOgAAyaZbYmIeP3DPkld3jgKGV8mXQ=
|
||||
github.com/godbus/dbus/v5 v5.2.2/go.mod h1:3AAv2+hPq5rdnr5txxxRwiGjPXamgoIHgz9FPBfOp3c=
|
||||
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
|
||||
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
|
||||
github.com/invopop/jsonschema v0.13.0 h1:KvpoAJWEjR3uD9Kbm2HWJmqsEaHt8lBUpd0qHcIi21E=
|
||||
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
|
||||
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
|
||||
github.com/jchv/go-winloader v0.0.0-20250406163304-c1995be93bd1 h1:njuLRcjAuMKr7kI3D85AXWkw6/+v9PwtV6M6o11sWHQ=
|
||||
github.com/jchv/go-winloader v0.0.0-20250406163304-c1995be93bd1/go.mod h1:alcuEEnZsY1WQsagKhZDsoPCRoOijYqhZvPwLG0kzVs=
|
||||
github.com/kevinburke/ssh_config v1.4.0 h1:6xxtP5bZ2E4NF5tuQulISpTO2z8XbtH8cg1PWkxoFkQ=
|
||||
github.com/kevinburke/ssh_config v1.4.0/go.mod h1:q2RIzfka+BXARoNexmF9gkxEX7DmvbW9P4hIVx2Kg4M=
|
||||
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
|
||||
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
|
||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||
github.com/leaanthony/go-ansi-parser v1.6.1 h1:xd8bzARK3dErqkPFtoF9F3/HgN8UQk0ed1YDKpEz01A=
|
||||
github.com/leaanthony/go-ansi-parser v1.6.1/go.mod h1:+vva/2y4alzVmmIEpk9QDhA7vLC5zKDTRwfZGOp3IWU=
|
||||
github.com/leaanthony/u v1.1.1 h1:TUFjwDGlNX+WuwVEzDqQwC2lOv0P4uhTQw7CMFdiK7M=
|
||||
github.com/leaanthony/u v1.1.1/go.mod h1:9+o6hejoRljvZ3BzdYlVL0JYCwtnAsVuN9pVTQcaRfI=
|
||||
github.com/lmittmann/tint v1.1.2 h1:2CQzrL6rslrsyjqLDwD11bZ5OpLBPU+g3G/r5LSfS8w=
|
||||
github.com/lmittmann/tint v1.1.2/go.mod h1:HIS3gSy7qNwGCj+5oRjAutErFBl4BzdQP6cJZ0NfMwE=
|
||||
github.com/mailru/easyjson v0.9.1 h1:LbtsOm5WAswyWbvTEOqhypdPeZzHavpZx96/n553mR8=
|
||||
github.com/mark3labs/mcp-go v0.43.2 h1:21PUSlWWiSbUPQwXIJ5WKlETixpFpq+WBpbMGDSVy/I=
|
||||
github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU=
|
||||
github.com/matryer/is v1.4.1 h1:55ehd8zaGABKLXQUe2awZ99BD/PTc2ls+KV/dXphgEQ=
|
||||
github.com/matryer/is v1.4.1/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU=
|
||||
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
|
||||
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
|
||||
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
||||
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||
github.com/onsi/gomega v1.34.1 h1:EUMJIKUjM8sKjYbtxQI9A4z2o+rruxnzNvpknOXie6k=
|
||||
github.com/onsi/gomega v1.34.1/go.mod h1:kU1QgUvBDLXBJq618Xvm2LUX6rSAfRaFRTcdOeDLwwY=
|
||||
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
|
||||
github.com/pjbgf/sha1cd v0.5.0 h1:a+UkboSi1znleCDUNT3M5YxjOnN1fz2FhN48FlwCxs0=
|
||||
github.com/pjbgf/sha1cd v0.5.0/go.mod h1:lhpGlyHLpQZoxMv8HcgXvZEhcGs0PG/vsZnEJ7H0iCM=
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
|
||||
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
|
||||
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
|
||||
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
|
||||
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
|
||||
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
|
||||
github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
|
||||
github.com/samber/lo v1.52.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
|
||||
github.com/sergi/go-diff v1.4.0 h1:n/SP9D5ad1fORl+llWyN+D6qoUETXNZARKjyY2/KVCw=
|
||||
github.com/sergi/go-diff v1.4.0/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
|
||||
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
|
||||
github.com/skeema/knownhosts v1.3.2 h1:EDL9mgf4NzwMXCTfaxSD/o/a5fxDw/xL9nkU28JjdBg=
|
||||
github.com/skeema/knownhosts v1.3.2/go.mod h1:bEg3iQAuw+jyiw+484wwFJoKSLwcfd7fqRy+N0QTiow=
|
||||
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
|
||||
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
|
||||
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
|
||||
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
|
||||
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
|
||||
github.com/wailsapp/go-webview2 v1.0.23 h1:jmv8qhz1lHibCc79bMM/a/FqOnnzOGEisLav+a0b9P0=
|
||||
github.com/wailsapp/go-webview2 v1.0.23/go.mod h1:qJmWAmAmaniuKGZPWwne+uor3AHMB5PFhqiK0Bbj8kc=
|
||||
github.com/wailsapp/wails/v3 v3.0.0-alpha.64 h1:xAhLFVfdbg7XdZQ5mMQmBv2BglWu8hMqe50Z+3UJvBs=
|
||||
github.com/wailsapp/wails/v3 v3.0.0-alpha.64/go.mod h1:zvgNL/mlFcX8aRGu6KOz9AHrMmTBD+4hJRQIONqF/Yw=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
|
||||
github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
|
||||
github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw=
|
||||
github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zIM+UJPGz4=
|
||||
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
|
||||
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
|
||||
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
|
||||
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
|
||||
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
|
||||
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
|
||||
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
|
||||
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
|
||||
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
|
||||
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200810151505-1b9f1253b3ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
|
||||
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
|
||||
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
|
||||
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
|
||||
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
|
||||
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
Binary file not shown. (Before: 172 B)
@@ -1,25 +0,0 @@
// Package icons provides embedded icon assets for the BugSETI application.
package icons

import _ "embed"

// TrayTemplate is the template icon for macOS systray (22x22 PNG, black on transparent).
// Template icons automatically adapt to light/dark mode on macOS.
//
//go:embed tray-template.png
var TrayTemplate []byte

// TrayLight is the light mode icon for Windows/Linux systray.
//
//go:embed tray-light.png
var TrayLight []byte

// TrayDark is the dark mode icon for Windows/Linux systray.
//
//go:embed tray-dark.png
var TrayDark []byte

// AppIcon is the main application icon.
//
//go:embed appicon.png
var AppIcon []byte
Binary file not shown. (Before: 171 B)
Binary file not shown. (Before: 171 B)
Binary file not shown. (Before: 153 B)
@@ -1,290 +0,0 @@
// Package main provides the BugSETI system tray application.
// BugSETI - "Distributed Bug Fixing like SETI@home but for code"
//
// The application runs as a system tray app that:
// - Pulls OSS issues from Forgejo
// - Uses AI to prepare context for each issue
// - Presents issues to users for fixing
// - Automates PR submission
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
	"runtime"
	"strings"

	"forge.lthn.ai/core/go/cmd/bugseti/icons"
	"forge.lthn.ai/core/cli/internal/bugseti"
	"forge.lthn.ai/core/cli/internal/bugseti/updater"
	"github.com/wailsapp/wails/v3/pkg/application"
	"github.com/wailsapp/wails/v3/pkg/events"
)

//go:embed all:frontend/dist/bugseti/browser
var assets embed.FS

func main() {
	// Strip the embed path prefix so files are served from root
	staticAssets, err := fs.Sub(assets, "frontend/dist/bugseti/browser")
	if err != nil {
		log.Fatal(err)
	}

	// Initialize the config service
	configService := bugseti.NewConfigService()
	if err := configService.Load(); err != nil {
		log.Printf("Warning: Could not load config: %v", err)
	}

	// Check Forgejo API availability
	forgeClient, err := bugseti.CheckForge()
	if err != nil {
		log.Fatalf("Forgejo check failed: %v\n\nConfigure with: core forge config --url URL --token TOKEN", err)
	}

	// Initialize core services
	notifyService := bugseti.NewNotifyService(configService)
	statsService := bugseti.NewStatsService(configService)
	fetcherService := bugseti.NewFetcherService(configService, notifyService, forgeClient)
	queueService := bugseti.NewQueueService(configService)
	seederService := bugseti.NewSeederService(configService, forgeClient.URL(), forgeClient.Token())
	submitService := bugseti.NewSubmitService(configService, notifyService, statsService, forgeClient)
	hubService := bugseti.NewHubService(configService)
	versionService := bugseti.NewVersionService()
	workspaceService := NewWorkspaceService(configService)

	// Initialize update service
	updateService, err := updater.NewService(configService)
	if err != nil {
		log.Printf("Warning: Could not initialize update service: %v", err)
	}

	// Create the tray service (we'll set the app reference later)
	trayService := NewTrayService(nil)

	// Build services list
	services := []application.Service{
		application.NewService(configService),
		application.NewService(notifyService),
		application.NewService(statsService),
		application.NewService(fetcherService),
		application.NewService(queueService),
		application.NewService(seederService),
		application.NewService(submitService),
		application.NewService(versionService),
		application.NewService(workspaceService),
		application.NewService(hubService),
		application.NewService(trayService),
	}

	// Add update service if available
	if updateService != nil {
		services = append(services, application.NewService(updateService))
	}

	// Create the application
	app := application.New(application.Options{
		Name:        "BugSETI",
		Description: "Distributed Bug Fixing - like SETI@home but for code",
		Services:    services,
		Assets: application.AssetOptions{
			Handler: spaHandler(staticAssets),
		},
		Mac: application.MacOptions{
			ActivationPolicy: application.ActivationPolicyAccessory,
		},
	})

	// Set the app reference and services in tray service
	trayService.app = app
	trayService.SetServices(fetcherService, queueService, configService, statsService)

	// Set up system tray
	setupSystemTray(app, fetcherService, queueService, configService)

	// Start update service background checker
	if updateService != nil {
		updateService.Start()
	}

	log.Println("Starting BugSETI...")
	log.Println(" - System tray active")
	log.Println(" - Waiting for issues...")
	log.Printf(" - Version: %s (%s)", bugseti.GetVersion(), bugseti.GetChannel())

	// Attempt hub registration (non-blocking)
	if hubURL := configService.GetHubURL(); hubURL != "" {
		if err := hubService.AutoRegister(); err != nil {
			log.Printf(" - Hub: auto-register skipped: %v", err)
		} else if err := hubService.Register(); err != nil {
			log.Printf(" - Hub: registration failed: %v", err)
		} else {
			log.Println(" - Hub: registered with portal")
		}
	} else {
		log.Println(" - Hub: not configured (set hubUrl in config)")
	}

	if err := app.Run(); err != nil {
		log.Fatal(err)
	}

	// Stop update service on exit
	if updateService != nil {
		updateService.Stop()
	}
}

// setupSystemTray configures the system tray icon and menu
func setupSystemTray(app *application.App, fetcher *bugseti.FetcherService, queue *bugseti.QueueService, config *bugseti.ConfigService) {
	systray := app.SystemTray.New()
	systray.SetTooltip("BugSETI - Distributed Bug Fixing")

	// Set tray icon based on OS
	if runtime.GOOS == "darwin" {
		systray.SetTemplateIcon(icons.TrayTemplate)
	} else {
		systray.SetDarkModeIcon(icons.TrayDark)
		systray.SetIcon(icons.TrayLight)
	}

	// Create tray panel window (workbench preview)
	trayWindow := app.Window.NewWithOptions(application.WebviewWindowOptions{
		Name:             "tray-panel",
		Title:            "BugSETI",
		Width:            420,
		Height:           520,
		URL:              "/tray",
		Hidden:           true,
		Frameless:        true,
		BackgroundColour: application.NewRGB(22, 27, 34),
	})
	systray.AttachWindow(trayWindow).WindowOffset(5)

	// Create main workbench window
	workbenchWindow := app.Window.NewWithOptions(application.WebviewWindowOptions{
		Name:             "workbench",
		Title:            "BugSETI Workbench",
		Width:            1200,
		Height:           800,
		URL:              "/workbench",
		Hidden:           true,
		BackgroundColour: application.NewRGB(22, 27, 34),
	})

	// Create settings window
	settingsWindow := app.Window.NewWithOptions(application.WebviewWindowOptions{
		Name:             "settings",
		Title:            "BugSETI Settings",
		Width:            600,
		Height:           500,
		URL:              "/settings",
		Hidden:           true,
		BackgroundColour: application.NewRGB(22, 27, 34),
	})

	// Create onboarding window
	onboardingWindow := app.Window.NewWithOptions(application.WebviewWindowOptions{
		Name:             "onboarding",
		Title:            "Welcome to BugSETI",
		Width:            700,
		Height:           600,
		URL:              "/onboarding",
		Hidden:           true,
		BackgroundColour: application.NewRGB(22, 27, 34),
	})

	// Build tray menu
	trayMenu := app.Menu.New()

	// Status item (dynamic)
	statusItem := trayMenu.Add("Status: Idle")
	statusItem.SetEnabled(false)

	trayMenu.AddSeparator()

	// Start/Pause toggle
	startPauseItem := trayMenu.Add("Start Fetching")
	startPauseItem.OnClick(func(ctx *application.Context) {
		if fetcher.IsRunning() {
			fetcher.Pause()
			startPauseItem.SetLabel("Start Fetching")
			statusItem.SetLabel("Status: Paused")
		} else {
			fetcher.Start()
			startPauseItem.SetLabel("Pause")
			statusItem.SetLabel("Status: Running")
		}
	})

	trayMenu.AddSeparator()

	// Current Issue
	currentIssueItem := trayMenu.Add("Current Issue: None")
	currentIssueItem.OnClick(func(ctx *application.Context) {
		if issue := queue.CurrentIssue(); issue != nil {
			workbenchWindow.Show()
			workbenchWindow.Focus()
		}
	})

	// Open Workbench
	trayMenu.Add("Open Workbench").OnClick(func(ctx *application.Context) {
		workbenchWindow.Show()
		workbenchWindow.Focus()
	})

	trayMenu.AddSeparator()

	// Settings
	trayMenu.Add("Settings...").OnClick(func(ctx *application.Context) {
		settingsWindow.Show()
		settingsWindow.Focus()
	})

	// Stats submenu
	statsMenu := trayMenu.AddSubmenu("Stats")
	statsMenu.Add("Issues Fixed: 0").SetEnabled(false)
	statsMenu.Add("PRs Merged: 0").SetEnabled(false)
	statsMenu.Add("Repos Contributed: 0").SetEnabled(false)

	trayMenu.AddSeparator()

	// Quit
	trayMenu.Add("Quit BugSETI").OnClick(func(ctx *application.Context) {
		app.Quit()
	})

	systray.SetMenu(trayMenu)

	// Check if onboarding needed (deferred until app is running)
	app.Event.RegisterApplicationEventHook(events.Common.ApplicationStarted, func(event *application.ApplicationEvent) {
		if !config.IsOnboarded() {
			onboardingWindow.Show()
			onboardingWindow.Focus()
		}
	})
}

// spaHandler wraps an fs.FS to serve static files with SPA fallback.
// If the requested path doesn't match a real file, it serves index.html
// so Angular's client-side router can handle the route.
func spaHandler(fsys fs.FS) http.Handler {
	fileServer := http.FileServer(http.FS(fsys))
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		path := strings.TrimPrefix(r.URL.Path, "/")
		if path == "" {
			path = "index.html"
		}

		// Check if the file exists
		if _, err := fs.Stat(fsys, path); err != nil {
			// File doesn't exist — serve index.html for SPA routing
			r.URL.Path = "/"
		}
		fileServer.ServeHTTP(w, r)
	})
}
@@ -1,158 +0,0 @@
// Package main provides the BugSETI system tray application.
package main

import (
	"context"
	"log"

	"forge.lthn.ai/core/cli/internal/bugseti"
	"github.com/wailsapp/wails/v3/pkg/application"
)

// TrayService provides system tray bindings for the frontend.
type TrayService struct {
	app     *application.App
	fetcher *bugseti.FetcherService
	queue   *bugseti.QueueService
	config  *bugseti.ConfigService
	stats   *bugseti.StatsService
}

// NewTrayService creates a new TrayService instance.
func NewTrayService(app *application.App) *TrayService {
	return &TrayService{
		app: app,
	}
}

// SetServices sets the service references after initialization.
func (t *TrayService) SetServices(fetcher *bugseti.FetcherService, queue *bugseti.QueueService, config *bugseti.ConfigService, stats *bugseti.StatsService) {
	t.fetcher = fetcher
	t.queue = queue
	t.config = config
	t.stats = stats
}

// ServiceName returns the service name for Wails.
func (t *TrayService) ServiceName() string {
	return "TrayService"
}

// ServiceStartup is called when the Wails application starts.
func (t *TrayService) ServiceStartup(ctx context.Context, options application.ServiceOptions) error {
	log.Println("TrayService started")
	return nil
}

// ServiceShutdown is called when the Wails application shuts down.
func (t *TrayService) ServiceShutdown() error {
	log.Println("TrayService shutdown")
	return nil
}

// TrayStatus represents the current status of the tray.
type TrayStatus struct {
	Running      bool   `json:"running"`
	CurrentIssue string `json:"currentIssue"`
	QueueSize    int    `json:"queueSize"`
	IssuesFixed  int    `json:"issuesFixed"`
	PRsMerged    int    `json:"prsMerged"`
}

// GetStatus returns the current tray status.
func (t *TrayService) GetStatus() TrayStatus {
	var currentIssue string
	if t.queue != nil {
		if issue := t.queue.CurrentIssue(); issue != nil {
			currentIssue = issue.Title
		}
	}

	var queueSize int
	if t.queue != nil {
		queueSize = t.queue.Size()
	}

	var running bool
	if t.fetcher != nil {
		running = t.fetcher.IsRunning()
	}

	var issuesFixed, prsMerged int
	if t.stats != nil {
		stats := t.stats.GetStats()
		issuesFixed = stats.IssuesAttempted
		prsMerged = stats.PRsMerged
	}

	return TrayStatus{
		Running:      running,
		CurrentIssue: currentIssue,
		QueueSize:    queueSize,
		IssuesFixed:  issuesFixed,
		PRsMerged:    prsMerged,
	}
}

// StartFetching starts the issue fetcher.
func (t *TrayService) StartFetching() error {
	if t.fetcher == nil {
		return nil
	}
	return t.fetcher.Start()
}

// PauseFetching pauses the issue fetcher.
func (t *TrayService) PauseFetching() {
	if t.fetcher != nil {
		t.fetcher.Pause()
	}
}

// GetCurrentIssue returns the current issue being worked on.
func (t *TrayService) GetCurrentIssue() *bugseti.Issue {
	if t.queue == nil {
		return nil
	}
	return t.queue.CurrentIssue()
}

// NextIssue moves to the next issue in the queue.
func (t *TrayService) NextIssue() *bugseti.Issue {
	if t.queue == nil {
		return nil
	}
	return t.queue.Next()
}

// SkipIssue skips the current issue.
func (t *TrayService) SkipIssue() {
	if t.queue == nil {
		return
	}
	t.queue.Skip()
}

// ShowWindow shows a specific window by name.
func (t *TrayService) ShowWindow(name string) {
	if t.app == nil {
		return
	}
	// Window will be shown by the frontend via Wails runtime
}

// IsOnboarded returns whether the user has completed onboarding.
func (t *TrayService) IsOnboarded() bool {
	if t.config == nil {
		return false
	}
	return t.config.IsOnboarded()
}

// CompleteOnboarding marks onboarding as complete.
func (t *TrayService) CompleteOnboarding() error {
	if t.config == nil {
		return nil
	}
	return t.config.CompleteOnboarding()
}
@@ -1,374 +0,0 @@
// Package main provides the BugSETI system tray application.
package main

import (
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"
	"sort"
	"sync"
	"time"

	"forge.lthn.ai/core/cli/internal/bugseti"
	"forge.lthn.ai/core/go/pkg/io/datanode"
	"github.com/Snider/Borg/pkg/tim"
)

const (
	// defaultMaxWorkspaces is the fallback upper bound when config is unavailable.
	defaultMaxWorkspaces = 100
	// defaultWorkspaceTTL is the fallback TTL when config is unavailable.
	defaultWorkspaceTTL = 24 * time.Hour
	// sweepInterval is how often the background sweeper runs.
	sweepInterval = 5 * time.Minute
)

// WorkspaceService manages DataNode-backed workspaces for issues.
// Each issue gets a sandboxed in-memory filesystem that can be
// snapshotted, packaged as a TIM container, or shipped as a crash report.
type WorkspaceService struct {
	config     *bugseti.ConfigService
	workspaces map[string]*Workspace // issue ID -> workspace
	mu         sync.RWMutex
	done       chan struct{} // signals the background sweeper to stop
	stopped    chan struct{} // closed when the sweeper goroutine exits
}

// Workspace tracks a DataNode-backed workspace for an issue.
type Workspace struct {
	Issue     *bugseti.Issue `json:"issue"`
	Medium    *datanode.Medium
	DiskPath  string    `json:"diskPath"`
	CreatedAt time.Time `json:"createdAt"`
	Snapshots int       `json:"snapshots"`
}

// CrashReport contains a packaged workspace state for debugging.
type CrashReport struct {
	IssueID   string    `json:"issueId"`
	Repo      string    `json:"repo"`
	Number    int       `json:"number"`
	Title     string    `json:"title"`
	Error     string    `json:"error"`
	Timestamp time.Time `json:"timestamp"`
	Data      []byte    `json:"data"` // tar snapshot
	Files     int       `json:"files"`
	Size      int64     `json:"size"`
}

// NewWorkspaceService creates a new WorkspaceService.
// Call Start() to begin the background TTL sweeper.
func NewWorkspaceService(config *bugseti.ConfigService) *WorkspaceService {
	return &WorkspaceService{
		config:     config,
		workspaces: make(map[string]*Workspace),
		done:       make(chan struct{}),
		stopped:    make(chan struct{}),
	}
}

// ServiceName returns the service name for Wails.
func (w *WorkspaceService) ServiceName() string {
	return "WorkspaceService"
}

// Start launches the background sweeper goroutine that periodically
// evicts expired workspaces. This prevents unbounded map growth even
// when no new Capture calls arrive.
func (w *WorkspaceService) Start() {
	go func() {
		defer close(w.stopped)
		ticker := time.NewTicker(sweepInterval)
		defer ticker.Stop()

		for {
			select {
			case <-ticker.C:
				w.mu.Lock()
				evicted := w.cleanup()
				w.mu.Unlock()
				if evicted > 0 {
					log.Printf("Workspace sweeper: evicted %d stale entries, %d remaining", evicted, w.ActiveWorkspaces())
				}
			case <-w.done:
				return
			}
		}
	}()
	log.Printf("Workspace sweeper started (interval=%s, ttl=%s, max=%d)",
		sweepInterval, w.ttl(), w.maxCap())
}

// Stop signals the background sweeper to exit and waits for it to finish.
func (w *WorkspaceService) Stop() {
	close(w.done)
	<-w.stopped
	log.Printf("Workspace sweeper stopped")
}

// ttl returns the configured workspace TTL, falling back to the default.
func (w *WorkspaceService) ttl() time.Duration {
	if w.config != nil {
		return w.config.GetWorkspaceTTL()
	}
	return defaultWorkspaceTTL
}

// maxCap returns the configured max workspace count, falling back to the default.
func (w *WorkspaceService) maxCap() int {
	if w.config != nil {
		return w.config.GetMaxWorkspaces()
	}
	return defaultMaxWorkspaces
}

// Capture loads a filesystem workspace into a DataNode Medium.
// Call this after git clone to create the in-memory snapshot.
func (w *WorkspaceService) Capture(issue *bugseti.Issue, diskPath string) error {
	if issue == nil {
		return fmt.Errorf("issue is nil")
	}

	m := datanode.New()

	// Walk the filesystem and load all files into the DataNode
	err := filepath.WalkDir(diskPath, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // skip errors
		}

		// Get relative path
		rel, err := filepath.Rel(diskPath, path)
		if err != nil {
			return nil
		}
		if rel == "." {
			return nil
		}

		// Skip .git internals (keep .git marker but not the pack files)
		if rel == ".git" {
			return fs.SkipDir
		}

		if d.IsDir() {
			return m.EnsureDir(rel)
		}

		// Skip large files (>1MB) to keep DataNode lightweight
		info, err := d.Info()
		if err != nil || info.Size() > 1<<20 {
			return nil
		}

		content, err := os.ReadFile(path)
		if err != nil {
			return nil
		}
		return m.Write(rel, string(content))
	})
	if err != nil {
		return fmt.Errorf("failed to capture workspace: %w", err)
	}

	w.mu.Lock()
	w.cleanup()
	w.workspaces[issue.ID] = &Workspace{
		Issue:     issue,
		Medium:    m,
		DiskPath:  diskPath,
		CreatedAt: time.Now(),
	}
	w.mu.Unlock()

	log.Printf("Captured workspace for issue #%d (%s)", issue.Number, issue.Repo)
	return nil
}

// GetMedium returns the DataNode Medium for an issue's workspace.
func (w *WorkspaceService) GetMedium(issueID string) *datanode.Medium {
	w.mu.RLock()
	defer w.mu.RUnlock()

	ws := w.workspaces[issueID]
	if ws == nil {
		return nil
	}
	return ws.Medium
}

// Snapshot takes a tar snapshot of the workspace.
func (w *WorkspaceService) Snapshot(issueID string) ([]byte, error) {
	w.mu.Lock()
	defer w.mu.Unlock()

	ws := w.workspaces[issueID]
	if ws == nil {
		return nil, fmt.Errorf("workspace not found: %s", issueID)
	}

	data, err := ws.Medium.Snapshot()
	if err != nil {
		return nil, fmt.Errorf("snapshot failed: %w", err)
	}

	ws.Snapshots++
	return data, nil
}

// PackageCrashReport captures the current workspace state as a crash report.
// Re-reads from disk to get the latest state (including git changes).
func (w *WorkspaceService) PackageCrashReport(issue *bugseti.Issue, errMsg string) (*CrashReport, error) {
	if issue == nil {
		return nil, fmt.Errorf("issue is nil")
	}

	w.mu.RLock()
	ws := w.workspaces[issue.ID]
	w.mu.RUnlock()

	var diskPath string
	if ws != nil {
		diskPath = ws.DiskPath
	} else {
		// Try to find the workspace on disk
		baseDir := w.config.GetWorkspaceDir()
		if baseDir == "" {
			baseDir = filepath.Join(os.TempDir(), "bugseti")
		}
		diskPath = filepath.Join(baseDir, sanitizeForPath(issue.Repo), fmt.Sprintf("issue-%d", issue.Number))
	}

	// Re-capture from disk to get latest state
	if err := w.Capture(issue, diskPath); err != nil {
		return nil, fmt.Errorf("capture failed: %w", err)
	}

	// Snapshot the captured workspace
	data, err := w.Snapshot(issue.ID)
	if err != nil {
		return nil, fmt.Errorf("snapshot failed: %w", err)
	}

	return &CrashReport{
		IssueID:   issue.ID,
		Repo:      issue.Repo,
		Number:    issue.Number,
		Title:     issue.Title,
		Error:     errMsg,
		Timestamp: time.Now(),
		Data:      data,
		Size:      int64(len(data)),
	}, nil
}

// PackageTIM wraps the workspace as a TIM container (runc-compatible bundle).
// The resulting TIM can be executed via runc or encrypted to .stim for transit.
func (w *WorkspaceService) PackageTIM(issueID string) (*tim.TerminalIsolationMatrix, error) {
	w.mu.RLock()
	ws := w.workspaces[issueID]
	w.mu.RUnlock()

	if ws == nil {
		return nil, fmt.Errorf("workspace not found: %s", issueID)
	}

	dn := ws.Medium.DataNode()
	return tim.FromDataNode(dn)
}

// SaveCrashReport writes a crash report to the data directory.
func (w *WorkspaceService) SaveCrashReport(report *CrashReport) (string, error) {
	dataDir := w.config.GetDataDir()
	if dataDir == "" {
		dataDir = filepath.Join(os.TempDir(), "bugseti")
	}

	crashDir := filepath.Join(dataDir, "crash-reports")
	if err := os.MkdirAll(crashDir, 0755); err != nil {
		return "", fmt.Errorf("failed to create crash dir: %w", err)
	}

	filename := fmt.Sprintf("crash-%s-issue-%d-%s.tar",
		sanitizeForPath(report.Repo),
		report.Number,
		report.Timestamp.Format("20060102-150405"),
	)
	path := filepath.Join(crashDir, filename)

	if err := os.WriteFile(path, report.Data, 0644); err != nil {
		return "", fmt.Errorf("failed to write crash report: %w", err)
	}

	log.Printf("Crash report saved: %s (%d bytes)", path, report.Size)
	return path, nil
}

// cleanup evicts expired workspaces and enforces the max size cap.
// Must be called with w.mu held for writing.
// Returns the number of evicted entries.
func (w *WorkspaceService) cleanup() int {
	now := time.Now()
	ttl := w.ttl()
	cap := w.maxCap()
	evicted := 0

	// First pass: evict entries older than TTL.
	for id, ws := range w.workspaces {
		if now.Sub(ws.CreatedAt) > ttl {
			delete(w.workspaces, id)
			evicted++
		}
	}

	// Second pass: if still over cap, evict oldest entries.
	if len(w.workspaces) > cap {
		type entry struct {
			id        string
			createdAt time.Time
		}
		entries := make([]entry, 0, len(w.workspaces))
		for id, ws := range w.workspaces {
			entries = append(entries, entry{id, ws.CreatedAt})
		}
		sort.Slice(entries, func(i, j int) bool {
			return entries[i].createdAt.Before(entries[j].createdAt)
		})
		toEvict := len(w.workspaces) - cap
		for i := 0; i < toEvict; i++ {
			delete(w.workspaces, entries[i].id)
			evicted++
		}
	}

	return evicted
}

// Release removes a workspace from memory.
func (w *WorkspaceService) Release(issueID string) {
	w.mu.Lock()
	delete(w.workspaces, issueID)
	w.mu.Unlock()
}

// ActiveWorkspaces returns the count of active workspaces.
func (w *WorkspaceService) ActiveWorkspaces() int {
	w.mu.RLock()
	defer w.mu.RUnlock()
	return len(w.workspaces)
}

// sanitizeForPath converts owner/repo to a safe directory name.
func sanitizeForPath(s string) string {
	result := make([]byte, 0, len(s))
	for _, c := range s {
		if c == '/' || c == '\\' || c == ':' {
			result = append(result, '-')
		} else {
			result = append(result, byte(c))
		}
	}
	return string(result)
}
@@ -1,151 +0,0 @@
package main

import (
	"fmt"
	"testing"
	"time"

	"forge.lthn.ai/core/cli/internal/bugseti"
)

func TestCleanup_TTL(t *testing.T) {
	svc := NewWorkspaceService(bugseti.NewConfigService())

	// Seed with entries that are older than TTL.
	svc.mu.Lock()
	for i := 0; i < 5; i++ {
		svc.workspaces[fmt.Sprintf("old-%d", i)] = &Workspace{
			CreatedAt: time.Now().Add(-25 * time.Hour),
		}
	}
	// Add one fresh entry.
	svc.workspaces["fresh"] = &Workspace{
		CreatedAt: time.Now(),
	}
	svc.cleanup()
	svc.mu.Unlock()

	if got := svc.ActiveWorkspaces(); got != 1 {
		t.Errorf("expected 1 workspace after TTL cleanup, got %d", got)
	}
}

func TestCleanup_MaxSize(t *testing.T) {
	svc := NewWorkspaceService(bugseti.NewConfigService())

	maxCap := svc.maxCap()

	// Fill beyond the cap with fresh entries.
	svc.mu.Lock()
	for i := 0; i < maxCap+20; i++ {
		svc.workspaces[fmt.Sprintf("ws-%d", i)] = &Workspace{
			CreatedAt: time.Now().Add(-time.Duration(i) * time.Minute),
		}
	}
	svc.cleanup()
	svc.mu.Unlock()

	if got := svc.ActiveWorkspaces(); got != maxCap {
		t.Errorf("expected %d workspaces after cap cleanup, got %d", maxCap, got)
	}
}

func TestCleanup_EvictsOldestWhenOverCap(t *testing.T) {
	svc := NewWorkspaceService(bugseti.NewConfigService())

	maxCap := svc.maxCap()

	// Create maxCap+1 entries; the newest should survive.
	svc.mu.Lock()
	for i := 0; i <= maxCap; i++ {
		svc.workspaces[fmt.Sprintf("ws-%d", i)] = &Workspace{
			CreatedAt: time.Now().Add(-time.Duration(maxCap-i) * time.Minute),
		}
	}
	svc.cleanup()
	svc.mu.Unlock()

	// The newest entry (ws-<maxCap>) should still exist.
	newest := fmt.Sprintf("ws-%d", maxCap)

	svc.mu.RLock()
	_, exists := svc.workspaces[newest]
	svc.mu.RUnlock()
	if !exists {
		t.Error("expected newest workspace to survive eviction")
	}

	// The oldest entry (ws-0) should have been evicted.
	svc.mu.RLock()
	_, exists = svc.workspaces["ws-0"]
	svc.mu.RUnlock()
	if exists {
		t.Error("expected oldest workspace to be evicted")
	}
}

func TestCleanup_ReturnsEvictedCount(t *testing.T) {
	svc := NewWorkspaceService(bugseti.NewConfigService())

	svc.mu.Lock()
	for i := 0; i < 3; i++ {
		svc.workspaces[fmt.Sprintf("old-%d", i)] = &Workspace{
			CreatedAt: time.Now().Add(-25 * time.Hour),
		}
	}
	svc.workspaces["fresh"] = &Workspace{
		CreatedAt: time.Now(),
	}
	evicted := svc.cleanup()
	svc.mu.Unlock()

	if evicted != 3 {
		t.Errorf("expected 3 evicted entries, got %d", evicted)
	}
}

func TestStartStop(t *testing.T) {
	svc := NewWorkspaceService(bugseti.NewConfigService())
	svc.Start()

	// Add a stale entry while the sweeper is running.
	svc.mu.Lock()
	svc.workspaces["stale"] = &Workspace{
		CreatedAt: time.Now().Add(-25 * time.Hour),
	}
	svc.mu.Unlock()

	// Stop should return without hanging.
	svc.Stop()
}

func TestConfigurableTTL(t *testing.T) {
	cfg := bugseti.NewConfigService()
	svc := NewWorkspaceService(cfg)

	// Default TTL should be 24h (1440 minutes).
	if got := svc.ttl(); got != 24*time.Hour {
		t.Errorf("expected default TTL of 24h, got %s", got)
	}

	// Default max cap should be 100.
	if got := svc.maxCap(); got != 100 {
		t.Errorf("expected default max cap of 100, got %d", got)
	}
}

func TestNilConfigFallback(t *testing.T) {
	svc := &WorkspaceService{
		config:     nil,
		workspaces: make(map[string]*Workspace),
		done:       make(chan struct{}),
		stopped:    make(chan struct{}),
	}

	if got := svc.ttl(); got != defaultWorkspaceTTL {
		t.Errorf("expected fallback TTL %s, got %s", defaultWorkspaceTTL, got)
	}
	if got := svc.maxCap(); got != defaultMaxWorkspaces {
		t.Errorf("expected fallback max cap %d, got %d", defaultMaxWorkspaces, got)
	}
}
@ -1,646 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"log"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// ConfigService manages application configuration and persistence.
|
||||
type ConfigService struct {
|
||||
config *Config
|
||||
path string
|
||||
mu sync.RWMutex
|
||||
}
|
||||
|
||||
// Config holds all BugSETI configuration.
|
||||
type Config struct {
|
||||
// Authentication — Forgejo API (resolved via pkg/forge config if empty)
|
||||
ForgeURL string `json:"forgeUrl,omitempty"`
|
||||
ForgeToken string `json:"forgeToken,omitempty"`
|
||||
|
||||
// Hub coordination (agentic portal)
|
||||
HubURL string `json:"hubUrl,omitempty"`
|
||||
HubToken string `json:"hubToken,omitempty"`
|
||||
ClientID string `json:"clientId,omitempty"`
|
||||
ClientName string `json:"clientName,omitempty"`
|
||||
|
||||
// Deprecated: use ForgeToken. Kept for migration.
|
||||
GitHubToken string `json:"githubToken,omitempty"`
|
||||
|
||||
// Repositories
|
||||
WatchedRepos []string `json:"watchedRepos"`
|
||||
Labels []string `json:"labels"`
|
||||
|
||||
// Scheduling
|
||||
WorkHours *WorkHours `json:"workHours,omitempty"`
|
||||
FetchInterval int `json:"fetchIntervalMinutes"`
|
||||
|
||||
// Notifications
|
||||
NotificationsEnabled bool `json:"notificationsEnabled"`
|
||||
NotificationSound bool `json:"notificationSound"`
|
||||
|
||||
// Workspace
|
||||
WorkspaceDir string `json:"workspaceDir,omitempty"`
|
||||
DataDir string `json:"dataDir,omitempty"`
|
||||
// Marketplace MCP
|
||||
MarketplaceMCPRoot string `json:"marketplaceMcpRoot,omitempty"`
|
||||
|
||||
// Onboarding
|
||||
Onboarded bool `json:"onboarded"`
|
||||
OnboardedAt time.Time `json:"onboardedAt,omitempty"`
|
||||
|
||||
// UI Preferences
|
||||
Theme string `json:"theme"`
|
||||
ShowTrayPanel bool `json:"showTrayPanel"`
|
||||
|
||||
// Advanced
|
||||
MaxConcurrentIssues int `json:"maxConcurrentIssues"`
|
||||
AutoSeedContext bool `json:"autoSeedContext"`
|
||||
|
||||
// Workspace cache
|
||||
MaxWorkspaces int `json:"maxWorkspaces"` // Upper bound on cached workspace entries (0 = default 100)
|
||||
WorkspaceTTLMinutes int `json:"workspaceTtlMinutes"` // TTL for workspace entries in minutes (0 = default 1440 = 24h)
|
||||
|
||||
// Updates
|
||||
UpdateChannel string `json:"updateChannel"` // stable, beta, nightly
|
||||
AutoUpdate bool `json:"autoUpdate"` // Automatically install updates
|
||||
UpdateCheckInterval int `json:"updateCheckInterval"` // Check interval in hours (0 = disabled)
|
||||
LastUpdateCheck time.Time `json:"lastUpdateCheck,omitempty"`
|
||||
}
|
||||
|
||||
// WorkHours defines when BugSETI should actively fetch issues.
|
||||
type WorkHours struct {
|
||||
Enabled bool `json:"enabled"`
|
||||
StartHour int `json:"startHour"` // 0-23
|
||||
EndHour int `json:"endHour"` // 0-23
|
||||
Days []int `json:"days"` // 0=Sunday, 6=Saturday
|
||||
Timezone string `json:"timezone"`
|
||||
}
|
||||
|
||||
// NewConfigService creates a new ConfigService with default values.
|
||||
func NewConfigService() *ConfigService {
|
||||
// Determine config path
|
||||
configDir, err := os.UserConfigDir()
|
||||
if err != nil {
|
||||
configDir = filepath.Join(os.Getenv("HOME"), ".config")
|
||||
}
|
||||
|
||||
bugsetiDir := filepath.Join(configDir, "bugseti")
|
||||
if err := os.MkdirAll(bugsetiDir, 0755); err != nil {
|
||||
log.Printf("Warning: could not create config directory: %v", err)
|
||||
}
|
||||
|
||||
return &ConfigService{
|
||||
path: filepath.Join(bugsetiDir, "config.json"),
|
||||
config: &Config{
|
||||
WatchedRepos: []string{},
|
||||
Labels: []string{
|
||||
"good first issue",
|
||||
"help wanted",
|
||||
"beginner-friendly",
|
||||
},
|
||||
FetchInterval: 15,
|
||||
NotificationsEnabled: true,
|
||||
NotificationSound: true,
|
||||
Theme: "dark",
|
||||
ShowTrayPanel: true,
|
||||
MaxConcurrentIssues: 1,
|
||||
AutoSeedContext: true,
|
||||
DataDir: bugsetiDir,
|
||||
MarketplaceMCPRoot: "",
|
||||
MaxWorkspaces: 100,
|
||||
WorkspaceTTLMinutes: 1440, // 24 hours
|
||||
UpdateChannel: "stable",
|
||||
AutoUpdate: false,
|
||||
UpdateCheckInterval: 6, // Check every 6 hours
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (c *ConfigService) ServiceName() string {
|
||||
return "ConfigService"
|
||||
}
|
||||
|
||||
// Load reads the configuration from disk.
|
||||
func (c *ConfigService) Load() error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
data, err := os.ReadFile(c.path)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
// No config file yet, use defaults
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
var config Config
|
||||
if err := json.Unmarshal(data, &config); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Merge with defaults for any new fields
|
||||
c.mergeDefaults(&config)
|
||||
c.config = &config
|
||||
return nil
|
||||
}
|
||||
|
||||
// Save persists the configuration to disk.
|
||||
func (c *ConfigService) Save() error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// saveUnsafe writes config without acquiring lock.
|
||||
func (c *ConfigService) saveUnsafe() error {
|
||||
data, err := json.MarshalIndent(c.config, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return os.WriteFile(c.path, data, 0600)
|
||||
}
|
||||
|
||||
// mergeDefaults fills in default values for any unset fields.
|
||||
func (c *ConfigService) mergeDefaults(config *Config) {
|
||||
if config.Labels == nil || len(config.Labels) == 0 {
|
||||
config.Labels = c.config.Labels
|
||||
}
|
||||
if config.FetchInterval == 0 {
|
||||
config.FetchInterval = 15
|
||||
}
|
||||
if config.Theme == "" {
|
||||
config.Theme = "dark"
|
||||
}
|
||||
if config.MaxConcurrentIssues == 0 {
|
||||
config.MaxConcurrentIssues = 1
|
||||
}
|
||||
if config.DataDir == "" {
|
||||
config.DataDir = c.config.DataDir
|
||||
}
|
||||
if config.MaxWorkspaces == 0 {
|
||||
config.MaxWorkspaces = 100
|
||||
}
|
||||
if config.WorkspaceTTLMinutes == 0 {
|
||||
config.WorkspaceTTLMinutes = 1440
|
||||
}
|
||||
if config.UpdateChannel == "" {
|
||||
config.UpdateChannel = "stable"
|
||||
}
|
||||
if config.UpdateCheckInterval == 0 {
|
||||
config.UpdateCheckInterval = 6
|
||||
}
|
||||
}
|
||||
|
||||
// GetConfig returns a copy of the current configuration.
|
||||
func (c *ConfigService) GetConfig() Config {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return *c.config
|
||||
}
|
||||
|
||||
// GetMarketplaceMCPRoot returns the configured marketplace MCP root path.
|
||||
func (c *ConfigService) GetMarketplaceMCPRoot() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.MarketplaceMCPRoot
|
||||
}
|
||||
|
||||
// SetConfig updates the configuration and saves it.
|
||||
func (c *ConfigService) SetConfig(config Config) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config = &config
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetWatchedRepos returns the list of watched repositories.
|
||||
func (c *ConfigService) GetWatchedRepos() []string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.WatchedRepos
|
||||
}
|
||||
|
||||
// AddWatchedRepo adds a repository to the watch list.
|
||||
func (c *ConfigService) AddWatchedRepo(repo string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
for _, r := range c.config.WatchedRepos {
|
||||
if r == repo {
|
||||
return nil // Already watching
|
||||
}
|
||||
}
|
||||
|
||||
c.config.WatchedRepos = append(c.config.WatchedRepos, repo)
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// RemoveWatchedRepo removes a repository from the watch list.
|
||||
func (c *ConfigService) RemoveWatchedRepo(repo string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
for i, r := range c.config.WatchedRepos {
|
||||
if r == repo {
|
||||
c.config.WatchedRepos = append(c.config.WatchedRepos[:i], c.config.WatchedRepos[i+1:]...)
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetLabels returns the issue labels to filter by.
|
||||
func (c *ConfigService) GetLabels() []string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.Labels
|
||||
}
|
||||
|
||||
// SetLabels updates the issue labels.
|
||||
func (c *ConfigService) SetLabels(labels []string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.Labels = labels
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetFetchInterval returns the fetch interval as a duration.
|
||||
func (c *ConfigService) GetFetchInterval() time.Duration {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return time.Duration(c.config.FetchInterval) * time.Minute
|
||||
}
|
||||
|
||||
// SetFetchInterval sets the fetch interval in minutes.
|
||||
func (c *ConfigService) SetFetchInterval(minutes int) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.FetchInterval = minutes
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// IsWithinWorkHours checks if the current time is within configured work hours.
|
||||
func (c *ConfigService) IsWithinWorkHours() bool {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
|
||||
if c.config.WorkHours == nil || !c.config.WorkHours.Enabled {
|
||||
return true // No work hours restriction
|
||||
}
|
||||
|
||||
wh := c.config.WorkHours
|
||||
now := time.Now()
|
||||
|
||||
// Check timezone
|
||||
if wh.Timezone != "" {
|
||||
loc, err := time.LoadLocation(wh.Timezone)
|
||||
if err == nil {
|
||||
now = now.In(loc)
|
||||
}
|
||||
}
|
||||
|
||||
// Check day
|
||||
day := int(now.Weekday())
|
||||
dayAllowed := false
|
||||
for _, d := range wh.Days {
|
||||
if d == day {
|
||||
dayAllowed = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !dayAllowed {
|
||||
return false
|
||||
}
|
||||
|
||||
// Check hour
|
||||
hour := now.Hour()
|
||||
if wh.StartHour <= wh.EndHour {
|
||||
return hour >= wh.StartHour && hour < wh.EndHour
|
||||
}
|
||||
// Handle overnight (e.g., 22:00 - 06:00)
|
||||
return hour >= wh.StartHour || hour < wh.EndHour
|
||||
}
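// Illustrative sketch (not part of the original file): how the overnight
// wrap-around handled above plays out, assuming a WorkHours value covering a
// 22:00-06:00 window on weekdays (the literal below is an example only).
//
//	wh := &WorkHours{Enabled: true, StartHour: 22, EndHour: 6, Days: []int{1, 2, 3, 4, 5}}
//	// 23:00 passes via hour >= StartHour, 03:00 passes via hour < EndHour,
//	// 12:00 fails both, so fetching stays paused outside the window.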
|
||||
|
||||
// GetWorkHours returns the work hours configuration.
|
||||
func (c *ConfigService) GetWorkHours() *WorkHours {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.WorkHours
|
||||
}
|
||||
|
||||
// SetWorkHours updates the work hours configuration.
|
||||
func (c *ConfigService) SetWorkHours(wh *WorkHours) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.WorkHours = wh
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// IsNotificationsEnabled returns whether notifications are enabled.
|
||||
func (c *ConfigService) IsNotificationsEnabled() bool {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.NotificationsEnabled
|
||||
}
|
||||
|
||||
// SetNotificationsEnabled enables or disables notifications.
|
||||
func (c *ConfigService) SetNotificationsEnabled(enabled bool) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.NotificationsEnabled = enabled
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetWorkspaceDir returns the workspace directory.
|
||||
func (c *ConfigService) GetWorkspaceDir() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.WorkspaceDir
|
||||
}
|
||||
|
||||
// SetWorkspaceDir sets the workspace directory.
|
||||
func (c *ConfigService) SetWorkspaceDir(dir string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.WorkspaceDir = dir
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetDataDir returns the data directory.
|
||||
func (c *ConfigService) GetDataDir() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.DataDir
|
||||
}
|
||||
|
||||
// IsOnboarded returns whether the user has completed onboarding.
|
||||
func (c *ConfigService) IsOnboarded() bool {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.Onboarded
|
||||
}
|
||||
|
||||
// CompleteOnboarding marks onboarding as complete.
|
||||
func (c *ConfigService) CompleteOnboarding() error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.Onboarded = true
|
||||
c.config.OnboardedAt = time.Now()
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetTheme returns the current theme.
|
||||
func (c *ConfigService) GetTheme() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.Theme
|
||||
}
|
||||
|
||||
// SetTheme sets the theme.
|
||||
func (c *ConfigService) SetTheme(theme string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.Theme = theme
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// IsAutoSeedEnabled returns whether automatic context seeding is enabled.
|
||||
func (c *ConfigService) IsAutoSeedEnabled() bool {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.AutoSeedContext
|
||||
}
|
||||
|
||||
// SetAutoSeedEnabled enables or disables automatic context seeding.
|
||||
func (c *ConfigService) SetAutoSeedEnabled(enabled bool) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.AutoSeedContext = enabled
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetMaxWorkspaces returns the maximum number of cached workspaces.
|
||||
func (c *ConfigService) GetMaxWorkspaces() int {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
if c.config.MaxWorkspaces <= 0 {
|
||||
return 100
|
||||
}
|
||||
return c.config.MaxWorkspaces
|
||||
}
|
||||
|
||||
// GetWorkspaceTTL returns the workspace TTL as a time.Duration.
|
||||
func (c *ConfigService) GetWorkspaceTTL() time.Duration {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
if c.config.WorkspaceTTLMinutes <= 0 {
|
||||
return 24 * time.Hour
|
||||
}
|
||||
return time.Duration(c.config.WorkspaceTTLMinutes) * time.Minute
|
||||
}
|
||||
|
||||
// UpdateSettings holds update-related configuration.
|
||||
type UpdateSettings struct {
|
||||
Channel string `json:"channel"`
|
||||
AutoUpdate bool `json:"autoUpdate"`
|
||||
CheckInterval int `json:"checkInterval"` // Hours
|
||||
LastCheck time.Time `json:"lastCheck"`
|
||||
}
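// Illustrative sketch (not part of the original file): with the JSON tags
// above, an UpdateSettings payload serialises roughly as below. The channel and
// checkInterval values match the defaults applied earlier in this file; the
// rest are examples only.
//
//	{"channel":"stable","autoUpdate":true,"checkInterval":6,"lastCheck":"2026-01-15T10:00:00Z"}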
|
||||
|
||||
// GetUpdateSettings returns the update settings.
|
||||
func (c *ConfigService) GetUpdateSettings() UpdateSettings {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return UpdateSettings{
|
||||
Channel: c.config.UpdateChannel,
|
||||
AutoUpdate: c.config.AutoUpdate,
|
||||
CheckInterval: c.config.UpdateCheckInterval,
|
||||
LastCheck: c.config.LastUpdateCheck,
|
||||
}
|
||||
}
|
||||
|
||||
// SetUpdateSettings updates the update settings.
|
||||
func (c *ConfigService) SetUpdateSettings(settings UpdateSettings) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.UpdateChannel = settings.Channel
|
||||
c.config.AutoUpdate = settings.AutoUpdate
|
||||
c.config.UpdateCheckInterval = settings.CheckInterval
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetUpdateChannel returns the update channel.
|
||||
func (c *ConfigService) GetUpdateChannel() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.UpdateChannel
|
||||
}
|
||||
|
||||
// SetUpdateChannel sets the update channel.
|
||||
func (c *ConfigService) SetUpdateChannel(channel string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.UpdateChannel = channel
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// IsAutoUpdateEnabled returns whether automatic updates are enabled.
|
||||
func (c *ConfigService) IsAutoUpdateEnabled() bool {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.AutoUpdate
|
||||
}
|
||||
|
||||
// SetAutoUpdateEnabled enables or disables automatic updates.
|
||||
func (c *ConfigService) SetAutoUpdateEnabled(enabled bool) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.AutoUpdate = enabled
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetUpdateCheckInterval returns the update check interval in hours.
|
||||
func (c *ConfigService) GetUpdateCheckInterval() int {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.UpdateCheckInterval
|
||||
}
|
||||
|
||||
// SetUpdateCheckInterval sets the update check interval in hours.
|
||||
func (c *ConfigService) SetUpdateCheckInterval(hours int) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.UpdateCheckInterval = hours
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetLastUpdateCheck returns the last update check time.
|
||||
func (c *ConfigService) GetLastUpdateCheck() time.Time {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.LastUpdateCheck
|
||||
}
|
||||
|
||||
// SetLastUpdateCheck sets the last update check time.
|
||||
func (c *ConfigService) SetLastUpdateCheck(t time.Time) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.LastUpdateCheck = t
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetForgeURL returns the configured Forge URL (may be empty to use pkg/forge defaults).
|
||||
func (c *ConfigService) GetForgeURL() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.ForgeURL
|
||||
}
|
||||
|
||||
// GetForgeToken returns the configured Forge token (may be empty to use pkg/forge defaults).
|
||||
func (c *ConfigService) GetForgeToken() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.ForgeToken
|
||||
}
|
||||
|
||||
// SetForgeURL sets the Forge URL.
|
||||
func (c *ConfigService) SetForgeURL(url string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.ForgeURL = url
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// SetForgeToken sets the Forge token.
|
||||
func (c *ConfigService) SetForgeToken(token string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.ForgeToken = token
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetHubURL returns the configured Hub URL.
|
||||
func (c *ConfigService) GetHubURL() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.HubURL
|
||||
}
|
||||
|
||||
// SetHubURL sets the Hub URL.
|
||||
func (c *ConfigService) SetHubURL(url string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.HubURL = url
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetHubToken returns the configured Hub token.
|
||||
func (c *ConfigService) GetHubToken() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.HubToken
|
||||
}
|
||||
|
||||
// SetHubToken sets the Hub token.
|
||||
func (c *ConfigService) SetHubToken(token string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.HubToken = token
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetClientID returns the configured client ID.
|
||||
func (c *ConfigService) GetClientID() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.ClientID
|
||||
}
|
||||
|
||||
// SetClientID sets the client ID.
|
||||
func (c *ConfigService) SetClientID(id string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.ClientID = id
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// GetClientName returns the configured client name.
|
||||
func (c *ConfigService) GetClientName() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
return c.config.ClientName
|
||||
}
|
||||
|
||||
// SetClientName sets the client name.
|
||||
func (c *ConfigService) SetClientName(name string) error {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.config.ClientName = name
|
||||
return c.saveUnsafe()
|
||||
}
|
||||
|
||||
// ShouldCheckForUpdates returns true if it's time to check for updates.
|
||||
func (c *ConfigService) ShouldCheckForUpdates() bool {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
|
||||
if c.config.UpdateCheckInterval <= 0 {
|
||||
return false // Updates disabled
|
||||
}
|
||||
|
||||
if c.config.LastUpdateCheck.IsZero() {
|
||||
return true // Never checked
|
||||
}
|
||||
|
||||
interval := time.Duration(c.config.UpdateCheckInterval) * time.Hour
|
||||
return time.Since(c.config.LastUpdateCheck) >= interval
|
||||
}
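// Illustrative sketch (not part of the original file): a minimal caller loop
// built on ShouldCheckForUpdates. checkForUpdates is a hypothetical updater
// call; cfg is an already-constructed *ConfigService.
//
//	if cfg.ShouldCheckForUpdates() {
//		checkForUpdates()                      // hypothetical
//		_ = cfg.SetLastUpdateCheck(time.Now()) // restarts the interval window
//	}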
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"os"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestConfigPermissions(t *testing.T) {
|
||||
// Get a temporary file path
|
||||
f, err := os.CreateTemp("", "bugseti-config-*.json")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
name := f.Name()
|
||||
f.Close()
|
||||
os.Remove(name) // Ensure it doesn't exist
|
||||
defer os.Remove(name)
|
||||
|
||||
c := &ConfigService{
|
||||
path: name,
|
||||
config: &Config{},
|
||||
}
|
||||
|
||||
if err := c.Save(); err != nil {
|
||||
t.Fatalf("Save failed: %v", err)
|
||||
}
|
||||
|
||||
info, err := os.Stat(name)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
mode := info.Mode().Perm()
|
||||
if mode != 0600 {
|
||||
t.Errorf("expected file permissions 0600, got %04o", mode)
|
||||
}
|
||||
}
|
||||
|
|
@ -1,252 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/xml"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
maxEnvRunes = 512
|
||||
maxTitleRunes = 160
|
||||
maxNotificationRunes = 200
|
||||
maxSummaryRunes = 4000
|
||||
maxBodyRunes = 8000
|
||||
maxFileRunes = 260
|
||||
)
|
||||
|
||||
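// EthicsGuard holds the ethics modal and axioms fetched from the marketplace
// (when one is reachable) and provides sanitisation helpers for untrusted
// issue content such as titles, summaries, notification text, file paths, and
// environment values.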
type EthicsGuard struct {
|
||||
Modal string
|
||||
Axioms map[string]any
|
||||
Loaded bool
|
||||
}
|
||||
|
||||
var (
|
||||
ethicsGuardMu sync.Mutex
|
||||
ethicsGuard *EthicsGuard
|
||||
ethicsGuardRoot string
|
||||
)
|
||||
|
||||
func getEthicsGuard(ctx context.Context) *EthicsGuard {
|
||||
return getEthicsGuardWithRoot(ctx, "")
|
||||
}
|
||||
|
||||
func getEthicsGuardWithRoot(ctx context.Context, rootHint string) *EthicsGuard {
|
||||
rootHint = strings.TrimSpace(rootHint)
|
||||
|
||||
ethicsGuardMu.Lock()
|
||||
defer ethicsGuardMu.Unlock()
|
||||
|
||||
if ethicsGuard != nil && ethicsGuardRoot == rootHint {
|
||||
return ethicsGuard
|
||||
}
|
||||
|
||||
guard := loadEthicsGuard(ctx, rootHint)
|
||||
if guard == nil {
|
||||
guard = &EthicsGuard{}
|
||||
}
|
||||
|
||||
ethicsGuard = guard
|
||||
ethicsGuardRoot = rootHint
|
||||
return ethicsGuard
|
||||
}
|
||||
|
||||
func guardFromMarketplace(ctx context.Context, client marketplaceClient) *EthicsGuard {
|
||||
if client == nil {
|
||||
return &EthicsGuard{}
|
||||
}
|
||||
if ctx == nil {
|
||||
ctx = context.Background()
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
|
||||
defer cancel()
|
||||
ethics, err := client.EthicsCheck(ctx)
|
||||
if err != nil || ethics == nil {
|
||||
return &EthicsGuard{}
|
||||
}
|
||||
|
||||
return &EthicsGuard{
|
||||
Modal: ethics.Modal,
|
||||
Axioms: ethics.Axioms,
|
||||
Loaded: true,
|
||||
}
|
||||
}
|
||||
|
||||
func loadEthicsGuard(ctx context.Context, rootHint string) *EthicsGuard {
|
||||
if ctx == nil {
|
||||
ctx = context.Background()
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
|
||||
defer cancel()
|
||||
client, err := newMarketplaceClient(ctx, rootHint)
|
||||
if err != nil {
|
||||
return &EthicsGuard{}
|
||||
}
|
||||
defer client.Close()
|
||||
|
||||
ethics, err := client.EthicsCheck(ctx)
|
||||
if err != nil || ethics == nil {
|
||||
return &EthicsGuard{}
|
||||
}
|
||||
|
||||
return &EthicsGuard{
|
||||
Modal: ethics.Modal,
|
||||
Axioms: ethics.Axioms,
|
||||
Loaded: true,
|
||||
}
|
||||
}
|
||||
|
||||
func (g *EthicsGuard) SanitizeEnv(value string) string {
|
||||
return stripShellMeta(sanitizeInline(value, maxEnvRunes))
|
||||
}
|
||||
|
||||
// stripShellMeta removes shell metacharacters that could allow command
|
||||
// injection when a value is interpolated inside a shell environment variable.
|
||||
func stripShellMeta(s string) string {
|
||||
var b strings.Builder
|
||||
b.Grow(len(s))
|
||||
for _, r := range s {
|
||||
switch r {
|
||||
case '`', '$', ';', '|', '&', '(', ')', '{', '}', '<', '>', '!', '\\', '\'', '"', '\n', '\r':
|
||||
continue
|
||||
default:
|
||||
b.WriteRune(r)
|
||||
}
|
||||
}
|
||||
return strings.TrimSpace(b.String())
|
||||
}
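// Illustrative sketch (not part of the original file): example transformations,
// matching cases exercised by the SanitizeEnv tests later in this change.
//
//	stripShellMeta("owner/repo;rm -rf /") // -> "owner/reporm -rf /"
//	stripShellMeta("owner/repo`whoami`")  // -> "owner/repowhoami"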
|
||||
|
||||
func (g *EthicsGuard) SanitizeTitle(value string) string {
|
||||
return sanitizeInline(value, maxTitleRunes)
|
||||
}
|
||||
|
||||
func (g *EthicsGuard) SanitizeNotification(value string) string {
|
||||
return sanitizeInline(value, maxNotificationRunes)
|
||||
}
|
||||
|
||||
func (g *EthicsGuard) SanitizeSummary(value string) string {
|
||||
return sanitizeMultiline(value, maxSummaryRunes)
|
||||
}
|
||||
|
||||
func (g *EthicsGuard) SanitizeBody(value string) string {
|
||||
return sanitizeMultiline(value, maxBodyRunes)
|
||||
}
|
||||
|
||||
func (g *EthicsGuard) SanitizeFiles(values []string) []string {
|
||||
if len(values) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
seen := make(map[string]bool)
|
||||
clean := make([]string, 0, len(values))
|
||||
for _, value := range values {
|
||||
trimmed := sanitizeInline(value, maxFileRunes)
|
||||
if trimmed == "" {
|
||||
continue
|
||||
}
|
||||
if strings.Contains(trimmed, "..") {
|
||||
continue
|
||||
}
|
||||
if seen[trimmed] {
|
||||
continue
|
||||
}
|
||||
seen[trimmed] = true
|
||||
clean = append(clean, trimmed)
|
||||
}
|
||||
return clean
|
||||
}
|
||||
|
||||
func (g *EthicsGuard) SanitizeList(values []string, maxRunes int) []string {
|
||||
if len(values) == 0 {
|
||||
return nil
|
||||
}
|
||||
if maxRunes <= 0 {
|
||||
maxRunes = maxTitleRunes
|
||||
}
|
||||
clean := make([]string, 0, len(values))
|
||||
for _, value := range values {
|
||||
trimmed := sanitizeInline(value, maxRunes)
|
||||
if trimmed == "" {
|
||||
continue
|
||||
}
|
||||
clean = append(clean, trimmed)
|
||||
}
|
||||
return clean
|
||||
}
|
||||
|
||||
func sanitizeInline(input string, maxRunes int) string {
|
||||
return sanitizeText(input, maxRunes, false)
|
||||
}
|
||||
|
||||
func sanitizeMultiline(input string, maxRunes int) string {
|
||||
return sanitizeText(input, maxRunes, true)
|
||||
}
|
||||
|
||||
func sanitizeText(input string, maxRunes int, allowNewlines bool) string {
|
||||
if input == "" {
|
||||
return ""
|
||||
}
|
||||
if maxRunes <= 0 {
|
||||
maxRunes = maxSummaryRunes
|
||||
}
|
||||
|
||||
var b strings.Builder
|
||||
count := 0
|
||||
for _, r := range input {
|
||||
if r == '\r' {
|
||||
continue
|
||||
}
|
||||
if r == '\n' {
|
||||
if allowNewlines {
|
||||
b.WriteRune(r)
|
||||
count++
|
||||
} else {
|
||||
b.WriteRune(' ')
|
||||
count++
|
||||
}
|
||||
if count >= maxRunes {
|
||||
break
|
||||
}
|
||||
continue
|
||||
}
|
||||
if r == '\t' {
|
||||
b.WriteRune(' ')
|
||||
count++
|
||||
if count >= maxRunes {
|
||||
break
|
||||
}
|
||||
continue
|
||||
}
|
||||
if r < 0x20 || r == 0x7f {
|
||||
continue
|
||||
}
|
||||
b.WriteRune(r)
|
||||
count++
|
||||
if count >= maxRunes {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return strings.TrimSpace(b.String())
|
||||
}
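// Illustrative sketch (not part of the original file): example behaviour,
// mirroring the sanitisation tests later in this change.
//
//	sanitizeInline("Hello\nworld\t\x00", 50) // -> "Hello world" (newline and tab become spaces, NUL is dropped)
//	sanitizeMultiline("ab\ncd\tef\x00", 5)   // -> "ab\ncd" (newline kept, output capped at 5 runes)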
|
||||
|
||||
func escapeAppleScript(value string) string {
|
||||
value = strings.ReplaceAll(value, "\\", "\\\\")
|
||||
value = strings.ReplaceAll(value, "\"", "\\\"")
|
||||
return value
|
||||
}
|
||||
|
||||
func escapePowerShellXML(value string) string {
|
||||
var buffer bytes.Buffer
|
||||
_ = xml.EscapeText(&buffer, []byte(value))
|
||||
return buffer.String()
|
||||
}
|
||||
|
|
@ -1,74 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestSanitizeInline_Good(t *testing.T) {
|
||||
input := "Hello world"
|
||||
output := sanitizeInline(input, 50)
|
||||
if output != input {
|
||||
t.Fatalf("expected %q, got %q", input, output)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSanitizeInline_Bad(t *testing.T) {
|
||||
input := "Hello\nworld\t\x00"
|
||||
expected := "Hello world"
|
||||
output := sanitizeInline(input, 50)
|
||||
if output != expected {
|
||||
t.Fatalf("expected %q, got %q", expected, output)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSanitizeMultiline_Ugly(t *testing.T) {
|
||||
input := "ab\ncd\tef\x00"
|
||||
output := sanitizeMultiline(input, 5)
|
||||
if output != "ab\ncd" {
|
||||
t.Fatalf("expected %q, got %q", "ab\ncd", output)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSanitizeEnv_Good(t *testing.T) {
|
||||
g := &EthicsGuard{}
|
||||
input := "owner/repo-name"
|
||||
output := g.SanitizeEnv(input)
|
||||
if output != input {
|
||||
t.Fatalf("expected %q, got %q", input, output)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSanitizeEnv_Bad(t *testing.T) {
|
||||
g := &EthicsGuard{}
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
input string
|
||||
expected string
|
||||
}{
|
||||
{"backtick", "owner/repo`whoami`", "owner/repowhoami"},
|
||||
{"dollar", "owner/repo$(id)", "owner/repoid"},
|
||||
{"semicolon", "owner/repo;rm -rf /", "owner/reporm -rf /"},
|
||||
{"pipe", "owner/repo|cat /etc/passwd", "owner/repocat /etc/passwd"},
|
||||
{"ampersand", "owner/repo&&echo pwned", "owner/repoecho pwned"},
|
||||
{"mixed", "`$;|&(){}<>!\\'\"\n\r", ""},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
output := g.SanitizeEnv(tc.input)
|
||||
if output != tc.expected {
|
||||
t.Fatalf("expected %q, got %q", tc.expected, output)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStripShellMeta_Ugly(t *testing.T) {
|
||||
// All metacharacters should be stripped, leaving empty string
|
||||
input := "`$;|&(){}<>!\\'\""
|
||||
output := stripShellMeta(input)
|
||||
if output != "" {
|
||||
t.Fatalf("expected empty string, got %q", output)
|
||||
}
|
||||
}
|
||||
|
|
@ -1,276 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/forge"
|
||||
)
|
||||
|
||||
// FetcherService fetches issues from configured OSS repositories.
|
||||
type FetcherService struct {
|
||||
config *ConfigService
|
||||
notify *NotifyService
|
||||
forge *forge.Client
|
||||
running bool
|
||||
mu sync.RWMutex
|
||||
stopCh chan struct{}
|
||||
issuesCh chan []*Issue
|
||||
}
|
||||
|
||||
// NewFetcherService creates a new FetcherService.
|
||||
func NewFetcherService(config *ConfigService, notify *NotifyService, forgeClient *forge.Client) *FetcherService {
|
||||
return &FetcherService{
|
||||
config: config,
|
||||
notify: notify,
|
||||
forge: forgeClient,
|
||||
issuesCh: make(chan []*Issue, 10),
|
||||
}
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (f *FetcherService) ServiceName() string {
|
||||
return "FetcherService"
|
||||
}
|
||||
|
||||
// Start begins fetching issues from configured repositories.
|
||||
func (f *FetcherService) Start() error {
|
||||
f.mu.Lock()
|
||||
defer f.mu.Unlock()
|
||||
|
||||
if f.running {
|
||||
return nil
|
||||
}
|
||||
|
||||
f.running = true
|
||||
f.stopCh = make(chan struct{})
|
||||
|
||||
go f.fetchLoop()
|
||||
log.Println("FetcherService started")
|
||||
return nil
|
||||
}
|
||||
|
||||
// Pause stops fetching issues.
|
||||
func (f *FetcherService) Pause() {
|
||||
f.mu.Lock()
|
||||
defer f.mu.Unlock()
|
||||
|
||||
if !f.running {
|
||||
return
|
||||
}
|
||||
|
||||
f.running = false
|
||||
close(f.stopCh)
|
||||
log.Println("FetcherService paused")
|
||||
}
|
||||
|
||||
// IsRunning returns whether the fetcher is actively running.
|
||||
func (f *FetcherService) IsRunning() bool {
|
||||
f.mu.RLock()
|
||||
defer f.mu.RUnlock()
|
||||
return f.running
|
||||
}
|
||||
|
||||
// Issues returns a channel that receives batches of fetched issues.
|
||||
func (f *FetcherService) Issues() <-chan []*Issue {
|
||||
return f.issuesCh
|
||||
}
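// Illustrative sketch (not part of the original file): a minimal consumer of
// the Issues channel. The queue variable is hypothetical; any component that
// wants fetched batches would drain the channel like this.
//
//	_ = fetcher.Start()
//	go func() {
//		for batch := range fetcher.Issues() {
//			queue.Add(batch) // hypothetical sink for new issues
//		}
//	}()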
|
||||
|
||||
// fetchLoop periodically fetches issues from all configured repositories.
|
||||
func (f *FetcherService) fetchLoop() {
|
||||
// Initial fetch
|
||||
f.fetchAll()
|
||||
|
||||
// Set up ticker for periodic fetching
|
||||
interval := f.config.GetFetchInterval()
|
||||
if interval < time.Minute {
|
||||
interval = 15 * time.Minute
|
||||
}
|
||||
ticker := time.NewTicker(interval)
|
||||
defer ticker.Stop()
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-f.stopCh:
|
||||
return
|
||||
case <-ticker.C:
|
||||
// Check if within work hours
|
||||
if f.config.IsWithinWorkHours() {
|
||||
f.fetchAll()
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// fetchAll fetches issues from all configured repositories.
|
||||
func (f *FetcherService) fetchAll() {
|
||||
repos := f.config.GetWatchedRepos()
|
||||
if len(repos) == 0 {
|
||||
log.Println("No repositories configured")
|
||||
return
|
||||
}
|
||||
|
||||
var allIssues []*Issue
|
||||
for _, repo := range repos {
|
||||
issues, err := f.fetchFromRepo(repo)
|
||||
if err != nil {
|
||||
log.Printf("Error fetching from %s: %v", repo, err)
|
||||
continue
|
||||
}
|
||||
allIssues = append(allIssues, issues...)
|
||||
}
|
||||
|
||||
if len(allIssues) > 0 {
|
||||
select {
|
||||
case f.issuesCh <- allIssues:
|
||||
f.notify.Notify("BugSETI", fmt.Sprintf("Found %d new issues", len(allIssues)))
|
||||
default:
|
||||
// Channel full, skip
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// fetchFromRepo fetches issues from a single repository using the Forgejo API.
|
||||
func (f *FetcherService) fetchFromRepo(repo string) ([]*Issue, error) {
|
||||
owner, repoName, err := splitRepo(repo)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
labels := f.config.GetLabels()
|
||||
if len(labels) == 0 {
|
||||
labels = []string{"good first issue", "help wanted", "beginner-friendly"}
|
||||
}
|
||||
|
||||
forgeIssues, err := f.forge.ListIssues(owner, repoName, forge.ListIssuesOpts{
|
||||
State: "open",
|
||||
Labels: labels,
|
||||
Limit: 20,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("forge list issues failed: %w", err)
|
||||
}
|
||||
|
||||
issues := make([]*Issue, 0, len(forgeIssues))
|
||||
for _, fi := range forgeIssues {
|
||||
labelNames := make([]string, len(fi.Labels))
|
||||
for i, l := range fi.Labels {
|
||||
labelNames[i] = l.Name
|
||||
}
|
||||
|
||||
author := ""
|
||||
if fi.Poster != nil {
|
||||
author = fi.Poster.UserName
|
||||
}
|
||||
|
||||
issues = append(issues, &Issue{
|
||||
ID: fmt.Sprintf("%s#%d", repo, fi.Index),
|
||||
Number: int(fi.Index),
|
||||
Repo: repo,
|
||||
Title: fi.Title,
|
||||
Body: fi.Body,
|
||||
URL: fi.HTMLURL,
|
||||
Labels: labelNames,
|
||||
Author: author,
|
||||
CreatedAt: fi.Created,
|
||||
Priority: calculatePriority(labelNames),
|
||||
})
|
||||
}
|
||||
|
||||
return issues, nil
|
||||
}
|
||||
|
||||
// FetchIssue fetches a single issue by repo and number.
|
||||
func (f *FetcherService) FetchIssue(repo string, number int) (*Issue, error) {
|
||||
owner, repoName, err := splitRepo(repo)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
fi, err := f.forge.GetIssue(owner, repoName, int64(number))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("forge get issue failed: %w", err)
|
||||
}
|
||||
|
||||
labelNames := make([]string, len(fi.Labels))
|
||||
for i, l := range fi.Labels {
|
||||
labelNames[i] = l.Name
|
||||
}
|
||||
|
||||
author := ""
|
||||
if fi.Poster != nil {
|
||||
author = fi.Poster.UserName
|
||||
}
|
||||
|
||||
// Fetch comments
|
||||
forgeComments, err := f.forge.ListIssueComments(owner, repoName, int64(number))
|
||||
if err != nil {
|
||||
log.Printf("Warning: could not fetch comments for %s#%d: %v", repo, number, err)
|
||||
}
|
||||
|
||||
comments := make([]Comment, 0, len(forgeComments))
|
||||
for _, c := range forgeComments {
|
||||
commentAuthor := ""
|
||||
if c.Poster != nil {
|
||||
commentAuthor = c.Poster.UserName
|
||||
}
|
||||
comments = append(comments, Comment{
|
||||
Author: commentAuthor,
|
||||
Body: c.Body,
|
||||
})
|
||||
}
|
||||
|
||||
return &Issue{
|
||||
ID: fmt.Sprintf("%s#%d", repo, fi.Index),
|
||||
Number: int(fi.Index),
|
||||
Repo: repo,
|
||||
Title: fi.Title,
|
||||
Body: fi.Body,
|
||||
URL: fi.HTMLURL,
|
||||
Labels: labelNames,
|
||||
Author: author,
|
||||
CreatedAt: fi.Created,
|
||||
Priority: calculatePriority(labelNames),
|
||||
Comments: comments,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// splitRepo splits "owner/repo" into owner and repo parts.
|
||||
func splitRepo(repo string) (string, string, error) {
|
||||
parts := strings.SplitN(repo, "/", 2)
|
||||
if len(parts) != 2 {
|
||||
return "", "", fmt.Errorf("invalid repo format %q, expected owner/repo", repo)
|
||||
}
|
||||
return parts[0], parts[1], nil
|
||||
}
|
||||
|
||||
// calculatePriority assigns a priority score based on labels.
|
||||
func calculatePriority(labels []string) int {
|
||||
priority := 50 // Default priority
|
||||
|
||||
for _, label := range labels {
|
||||
lower := strings.ToLower(label)
|
||||
switch {
|
||||
case strings.Contains(lower, "good first issue"):
|
||||
priority += 30
|
||||
case strings.Contains(lower, "help wanted"):
|
||||
priority += 20
|
||||
case strings.Contains(lower, "beginner"):
|
||||
priority += 25
|
||||
case strings.Contains(lower, "easy"):
|
||||
priority += 20
|
||||
case strings.Contains(lower, "bug"):
|
||||
priority += 10
|
||||
case strings.Contains(lower, "documentation"):
|
||||
priority += 5
|
||||
case strings.Contains(lower, "priority"):
|
||||
priority += 15
|
||||
}
|
||||
}
|
||||
|
||||
return priority
|
||||
}
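// Illustrative sketch (not part of the original file): a worked example of the
// scoring above, matching the combined case in the tests for this file.
//
//	calculatePriority([]string{"good first issue", "bug"}) // 50 + 30 + 10 = 90
//	calculatePriority([]string{"unknown-label"})           // unknown labels leave the default 50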
|
||||
|
|
@ -1,407 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
// testConfigService creates a ConfigService with in-memory config for testing.
|
||||
func testConfigService(t *testing.T, repos []string, labels []string) *ConfigService {
|
||||
t.Helper()
|
||||
dir := t.TempDir()
|
||||
cs := &ConfigService{
|
||||
path: dir + "/config.json",
|
||||
config: &Config{
|
||||
WatchedRepos: repos,
|
||||
Labels: labels,
|
||||
FetchInterval: 15,
|
||||
DataDir: dir,
|
||||
},
|
||||
}
|
||||
return cs
|
||||
}
|
||||
|
||||
// TestHelperProcess is invoked by the test binary when GO_TEST_HELPER_PROCESS
|
||||
// is set. It prints the value of GO_TEST_HELPER_OUTPUT and optionally exits
|
||||
// with a non-zero code. Kept for future exec.Command mocking.
|
||||
func TestHelperProcess(t *testing.T) {
|
||||
if os.Getenv("GO_TEST_HELPER_PROCESS") != "1" {
|
||||
return
|
||||
}
|
||||
fmt.Fprint(os.Stdout, os.Getenv("GO_TEST_HELPER_OUTPUT"))
|
||||
if os.Getenv("GO_TEST_HELPER_EXIT_ERROR") == "1" {
|
||||
os.Exit(1)
|
||||
}
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
// ---- NewFetcherService ----
|
||||
|
||||
func TestNewFetcherService_Good(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
notify := NewNotifyService(cfg)
|
||||
f := NewFetcherService(cfg, notify, nil)
|
||||
|
||||
require.NotNil(t, f)
|
||||
assert.Equal(t, "FetcherService", f.ServiceName())
|
||||
assert.False(t, f.IsRunning())
|
||||
assert.NotNil(t, f.Issues())
|
||||
}
|
||||
|
||||
// ---- Start / Pause / IsRunning lifecycle ----
|
||||
|
||||
func TestStartPause_Good(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
notify := NewNotifyService(cfg)
|
||||
f := NewFetcherService(cfg, notify, nil)
|
||||
|
||||
require.NoError(t, f.Start())
|
||||
assert.True(t, f.IsRunning())
|
||||
|
||||
// Starting again is a no-op.
|
||||
require.NoError(t, f.Start())
|
||||
assert.True(t, f.IsRunning())
|
||||
|
||||
f.Pause()
|
||||
assert.False(t, f.IsRunning())
|
||||
|
||||
// Pausing again is a no-op.
|
||||
f.Pause()
|
||||
assert.False(t, f.IsRunning())
|
||||
}
|
||||
|
||||
// ---- calculatePriority ----
|
||||
|
||||
func TestCalculatePriority_Good(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
labels []string
|
||||
expected int
|
||||
}{
|
||||
{"no labels", nil, 50},
|
||||
{"good first issue", []string{"good first issue"}, 80},
|
||||
{"help wanted", []string{"Help Wanted"}, 70},
|
||||
{"beginner", []string{"beginner-friendly"}, 75},
|
||||
{"easy", []string{"Easy"}, 70},
|
||||
{"bug", []string{"bug"}, 60},
|
||||
{"documentation", []string{"Documentation"}, 55},
|
||||
{"priority", []string{"high-priority"}, 65},
|
||||
{"combined", []string{"good first issue", "bug"}, 90},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
assert.Equal(t, tt.expected, calculatePriority(tt.labels))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestCalculatePriority_Bad(t *testing.T) {
|
||||
// Unknown labels should not change priority from default.
|
||||
assert.Equal(t, 50, calculatePriority([]string{"unknown-label", "something-else"}))
|
||||
}
|
||||
|
||||
// ---- Label query construction ----
|
||||
|
||||
func TestLabelQuery_Good(t *testing.T) {
|
||||
// When config has custom labels, fetchFromRepo should use them.
|
||||
cfg := testConfigService(t, []string{"owner/repo"}, []string{"custom-label", "another"})
|
||||
labels := cfg.GetLabels()
|
||||
labelQuery := strings.Join(labels, ",")
|
||||
assert.Equal(t, "custom-label,another", labelQuery)
|
||||
}
|
||||
|
||||
func TestLabelQuery_Bad(t *testing.T) {
|
||||
// When config has empty labels, fetchFromRepo falls back to defaults.
|
||||
cfg := testConfigService(t, []string{"owner/repo"}, nil)
|
||||
labels := cfg.GetLabels()
|
||||
if len(labels) == 0 {
|
||||
labels = []string{"good first issue", "help wanted", "beginner-friendly"}
|
||||
}
|
||||
labelQuery := strings.Join(labels, ",")
|
||||
assert.Equal(t, "good first issue,help wanted,beginner-friendly", labelQuery)
|
||||
}
|
||||
|
||||
// ---- fetchFromRepo with mocked gh CLI output ----
|
||||
|
||||
func TestFetchFromRepo_Good(t *testing.T) {
|
||||
ghIssues := []struct {
|
||||
Number int `json:"number"`
|
||||
Title string `json:"title"`
|
||||
Body string `json:"body"`
|
||||
URL string `json:"url"`
|
||||
CreatedAt time.Time `json:"createdAt"`
|
||||
Author struct {
|
||||
Login string `json:"login"`
|
||||
} `json:"author"`
|
||||
Labels []struct {
|
||||
Name string `json:"name"`
|
||||
} `json:"labels"`
|
||||
}{
|
||||
{
|
||||
Number: 42,
|
||||
Title: "Fix login bug",
|
||||
Body: "The login page crashes",
|
||||
URL: "https://github.com/test/repo/issues/42",
|
||||
CreatedAt: time.Date(2026, 1, 15, 10, 0, 0, 0, time.UTC),
|
||||
},
|
||||
}
|
||||
ghIssues[0].Author.Login = "octocat"
|
||||
ghIssues[0].Labels = []struct {
|
||||
Name string `json:"name"`
|
||||
}{
|
||||
{Name: "good first issue"},
|
||||
{Name: "bug"},
|
||||
}
|
||||
|
||||
output, err := json.Marshal(ghIssues)
|
||||
require.NoError(t, err)
|
||||
|
||||
// The production code now fetches issues via the Forge API rather than an
// external CLI, so this test exercises the JSON parsing and Issue construction
// path directly, mirroring the field mapping that fetchFromRepo performs.
|
||||
var parsed []struct {
|
||||
Number int `json:"number"`
|
||||
Title string `json:"title"`
|
||||
Body string `json:"body"`
|
||||
URL string `json:"url"`
|
||||
CreatedAt time.Time `json:"createdAt"`
|
||||
Author struct {
|
||||
Login string `json:"login"`
|
||||
} `json:"author"`
|
||||
Labels []struct {
|
||||
Name string `json:"name"`
|
||||
} `json:"labels"`
|
||||
}
|
||||
require.NoError(t, json.Unmarshal(output, &parsed))
|
||||
require.Len(t, parsed, 1)
|
||||
|
||||
gi := parsed[0]
|
||||
labels := make([]string, len(gi.Labels))
|
||||
for i, l := range gi.Labels {
|
||||
labels[i] = l.Name
|
||||
}
|
||||
|
||||
issue := &Issue{
|
||||
ID: fmt.Sprintf("%s#%d", "test/repo", gi.Number),
|
||||
Number: gi.Number,
|
||||
Repo: "test/repo",
|
||||
Title: gi.Title,
|
||||
Body: gi.Body,
|
||||
URL: gi.URL,
|
||||
Labels: labels,
|
||||
Author: gi.Author.Login,
|
||||
CreatedAt: gi.CreatedAt,
|
||||
Priority: calculatePriority(labels),
|
||||
}
|
||||
|
||||
assert.Equal(t, "test/repo#42", issue.ID)
|
||||
assert.Equal(t, 42, issue.Number)
|
||||
assert.Equal(t, "Fix login bug", issue.Title)
|
||||
assert.Equal(t, "octocat", issue.Author)
|
||||
assert.Equal(t, []string{"good first issue", "bug"}, issue.Labels)
|
||||
assert.Equal(t, 90, issue.Priority) // 50 + 30 (good first issue) + 10 (bug)
|
||||
}
|
||||
|
||||
func TestFetchFromRepo_Bad_InvalidJSON(t *testing.T) {
|
||||
// Simulate gh returning invalid JSON.
|
||||
var ghIssues []struct {
|
||||
Number int `json:"number"`
|
||||
}
|
||||
err := json.Unmarshal([]byte(`not json at all`), &ghIssues)
|
||||
assert.Error(t, err, "invalid JSON should produce an error")
|
||||
}
|
||||
|
||||
func TestFetchFromRepo_Bad_GhNotInstalled(t *testing.T) {
|
||||
// Verify that a missing executable produces an exec error.
|
||||
cmd := exec.Command("gh-nonexistent-binary-12345")
|
||||
_, err := cmd.Output()
|
||||
assert.Error(t, err, "missing binary should produce an error")
|
||||
}
|
||||
|
||||
// ---- fetchAll: no repos configured ----
|
||||
|
||||
func TestFetchAll_Bad_NoRepos(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
notify := NewNotifyService(cfg)
|
||||
f := NewFetcherService(cfg, notify, nil)
|
||||
|
||||
// fetchAll with no repos should not panic and should not send to channel.
|
||||
f.fetchAll()
|
||||
|
||||
// Channel should be empty.
|
||||
select {
|
||||
case <-f.issuesCh:
|
||||
t.Fatal("expected no issues on channel when no repos configured")
|
||||
default:
|
||||
// expected
|
||||
}
|
||||
}
|
||||
|
||||
// ---- Channel backpressure ----
|
||||
|
||||
func TestChannelBackpressure_Ugly(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
notify := NewNotifyService(cfg)
|
||||
f := NewFetcherService(cfg, notify, nil)
|
||||
|
||||
// Fill the channel to capacity (buffer size is 10).
|
||||
for i := 0; i < 10; i++ {
|
||||
f.issuesCh <- []*Issue{{ID: fmt.Sprintf("test#%d", i)}}
|
||||
}
|
||||
|
||||
// Now try to send via the select path (same logic as fetchAll).
|
||||
// This should be a non-blocking drop, not a deadlock.
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
defer close(done)
|
||||
issues := []*Issue{{ID: "overflow#1"}}
|
||||
select {
|
||||
case f.issuesCh <- issues:
|
||||
// Shouldn't happen — channel is full.
|
||||
t.Error("expected channel send to be skipped due to backpressure")
|
||||
default:
|
||||
// This is the expected path — channel full, message dropped.
|
||||
}
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-done:
|
||||
// success — did not deadlock
|
||||
case <-time.After(time.Second):
|
||||
t.Fatal("backpressure test timed out — possible deadlock")
|
||||
}
|
||||
}
|
||||
|
||||
// ---- FetchIssue single-issue parsing ----
|
||||
|
||||
func TestFetchIssue_Good_Parse(t *testing.T) {
|
||||
// Test the JSON parsing and Issue construction for FetchIssue.
|
||||
ghIssue := struct {
|
||||
Number int `json:"number"`
|
||||
Title string `json:"title"`
|
||||
Body string `json:"body"`
|
||||
URL string `json:"url"`
|
||||
CreatedAt time.Time `json:"createdAt"`
|
||||
Author struct {
|
||||
Login string `json:"login"`
|
||||
} `json:"author"`
|
||||
Labels []struct {
|
||||
Name string `json:"name"`
|
||||
} `json:"labels"`
|
||||
Comments []struct {
|
||||
Body string `json:"body"`
|
||||
Author struct {
|
||||
Login string `json:"login"`
|
||||
} `json:"author"`
|
||||
} `json:"comments"`
|
||||
}{
|
||||
Number: 99,
|
||||
Title: "Add dark mode",
|
||||
Body: "Please add dark mode support",
|
||||
URL: "https://github.com/test/repo/issues/99",
|
||||
CreatedAt: time.Date(2026, 2, 1, 12, 0, 0, 0, time.UTC),
|
||||
}
|
||||
ghIssue.Author.Login = "contributor"
|
||||
ghIssue.Labels = []struct {
|
||||
Name string `json:"name"`
|
||||
}{
|
||||
{Name: "help wanted"},
|
||||
}
|
||||
ghIssue.Comments = []struct {
|
||||
Body string `json:"body"`
|
||||
Author struct {
|
||||
Login string `json:"login"`
|
||||
} `json:"author"`
|
||||
}{
|
||||
{Body: "I can work on this"},
|
||||
}
|
||||
ghIssue.Comments[0].Author.Login = "volunteer"
|
||||
|
||||
data, err := json.Marshal(ghIssue)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Re-parse as the function would.
|
||||
var parsed struct {
|
||||
Number int `json:"number"`
|
||||
Title string `json:"title"`
|
||||
Body string `json:"body"`
|
||||
URL string `json:"url"`
|
||||
CreatedAt time.Time `json:"createdAt"`
|
||||
Author struct {
|
||||
Login string `json:"login"`
|
||||
} `json:"author"`
|
||||
Labels []struct {
|
||||
Name string `json:"name"`
|
||||
} `json:"labels"`
|
||||
Comments []struct {
|
||||
Body string `json:"body"`
|
||||
Author struct {
|
||||
Login string `json:"login"`
|
||||
} `json:"author"`
|
||||
} `json:"comments"`
|
||||
}
|
||||
require.NoError(t, json.Unmarshal(data, &parsed))
|
||||
|
||||
labels := make([]string, len(parsed.Labels))
|
||||
for i, l := range parsed.Labels {
|
||||
labels[i] = l.Name
|
||||
}
|
||||
comments := make([]Comment, len(parsed.Comments))
|
||||
for i, c := range parsed.Comments {
|
||||
comments[i] = Comment{Author: c.Author.Login, Body: c.Body}
|
||||
}
|
||||
|
||||
issue := &Issue{
|
||||
ID: fmt.Sprintf("%s#%d", "test/repo", parsed.Number),
|
||||
Number: parsed.Number,
|
||||
Repo: "test/repo",
|
||||
Title: parsed.Title,
|
||||
Body: parsed.Body,
|
||||
URL: parsed.URL,
|
||||
Labels: labels,
|
||||
Author: parsed.Author.Login,
|
||||
CreatedAt: parsed.CreatedAt,
|
||||
Priority: calculatePriority(labels),
|
||||
Comments: comments,
|
||||
}
|
||||
|
||||
assert.Equal(t, "test/repo#99", issue.ID)
|
||||
assert.Equal(t, "contributor", issue.Author)
|
||||
assert.Equal(t, 70, issue.Priority) // 50 + 20 (help wanted)
|
||||
require.Len(t, issue.Comments, 1)
|
||||
assert.Equal(t, "volunteer", issue.Comments[0].Author)
|
||||
assert.Equal(t, "I can work on this", issue.Comments[0].Body)
|
||||
}
|
||||
|
||||
// ---- Issues() channel accessor ----
|
||||
|
||||
func TestIssuesChannel_Good(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
notify := NewNotifyService(cfg)
|
||||
f := NewFetcherService(cfg, notify, nil)
|
||||
|
||||
ch := f.Issues()
|
||||
require.NotNil(t, ch)
|
||||
|
||||
// Send and receive through the channel.
|
||||
go func() {
|
||||
f.issuesCh <- []*Issue{{ID: "test#1", Title: "Test issue"}}
|
||||
}()
|
||||
|
||||
select {
|
||||
case issues := <-ch:
|
||||
require.Len(t, issues, 1)
|
||||
assert.Equal(t, "test#1", issues[0].ID)
|
||||
case <-time.After(time.Second):
|
||||
t.Fatal("timed out waiting for issues on channel")
|
||||
}
|
||||
}
|
||||
|
|
@ -1,22 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"forge.lthn.ai/core/go/pkg/forge"
|
||||
)
|
||||
|
||||
// CheckForge verifies that the Forgejo API is configured and reachable.
|
||||
// It returns a ready-to-use client when a token is configured and the API
// responds, or an error with actionable instructions for the user.
|
||||
func CheckForge() (*forge.Client, error) {
|
||||
client, err := forge.NewFromConfig("", "")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Verify the token works by fetching the current user
|
||||
if _, err := client.GetCurrentUser(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return client, nil
|
||||
}
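// Illustrative sketch (not part of the original file): how a caller might use
// CheckForge at startup before wiring the fetcher. cfg and notify are assumed
// to be already-constructed ConfigService and NotifyService instances.
//
//	client, err := CheckForge()
//	if err != nil {
//		log.Fatalf("forge not configured: %v", err) // surfaces the actionable instructions
//	}
//	fetcher := NewFetcherService(cfg, notify, client)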
|
||||
|
|
@ -1,23 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"os"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestCheckForge_Bad_MissingConfig(t *testing.T) {
|
||||
// Clear any env-based forge config to ensure CheckForge fails
|
||||
t.Setenv("FORGE_TOKEN", "")
|
||||
t.Setenv("FORGE_URL", "")
|
||||
|
||||
// Point HOME to a temp dir so no config file is found
|
||||
t.Setenv("HOME", t.TempDir())
|
||||
if xdg := os.Getenv("XDG_CONFIG_HOME"); xdg != "" {
|
||||
t.Setenv("XDG_CONFIG_HOME", t.TempDir())
|
||||
}
|
||||
|
||||
_, err := CheckForge()
|
||||
if err == nil {
|
||||
t.Fatal("expected error when forge is not configured")
|
||||
}
|
||||
}
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
module forge.lthn.ai/core/cli/internal/bugseti
|
||||
|
||||
go 1.25.5
|
||||
|
||||
require (
|
||||
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0
|
||||
github.com/mark3labs/mcp-go v0.43.2
|
||||
github.com/stretchr/testify v1.11.1
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/42wim/httpsig v1.2.3 // indirect
|
||||
github.com/bahlo/generic-list-go v0.2.0 // indirect
|
||||
github.com/buger/jsonparser v1.1.1 // indirect
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
|
||||
github.com/davidmz/go-pageant v1.0.2 // indirect
|
||||
github.com/go-fed/httpsig v1.1.0 // indirect
|
||||
github.com/google/go-cmp v0.7.0 // indirect
|
||||
github.com/google/uuid v1.6.0 // indirect
|
||||
github.com/hashicorp/go-version v1.7.0 // indirect
|
||||
github.com/invopop/jsonschema v0.13.0 // indirect
|
||||
github.com/mailru/easyjson v0.9.1 // indirect
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
|
||||
github.com/rogpeppe/go-internal v1.14.1 // indirect
|
||||
github.com/spf13/cast v1.10.0 // indirect
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
|
||||
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
|
||||
golang.org/x/crypto v0.47.0 // indirect
|
||||
golang.org/x/sys v0.40.0 // indirect
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
)
|
||||
|
|
@ -1,39 +0,0 @@
|
|||
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0 h1:HTCWpzyWQOHDWt3LzI6/d2jvUDsw/vgGRWm/8BTvcqI=
|
||||
github.com/42wim/httpsig v1.2.3 h1:xb0YyWhkYj57SPtfSttIobJUPJZB9as1nsfo7KWVcEs=
|
||||
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
|
||||
github.com/bahlo/generic-list-go v0.2.0/go.mod h1:2KvAjgMlE5NNynlg/5iLrrCCZ2+5xWbdbCW3pNTGyYg=
|
||||
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
|
||||
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
github.com/davidmz/go-pageant v1.0.2 h1:bPblRCh5jGU+Uptpz6LgMZGD5hJoOt7otgT454WvHn0=
|
||||
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
|
||||
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
|
||||
github.com/go-fed/httpsig v1.1.0 h1:9M+hb0jkEICD8/cAiNqEB66R87tTINszBRTjwjQzWcI=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
|
||||
github.com/invopop/jsonschema v0.13.0 h1:KvpoAJWEjR3uD9Kbm2HWJmqsEaHt8lBUpd0qHcIi21E=
|
||||
github.com/invopop/jsonschema v0.13.0/go.mod h1:ffZ5Km5SWWRAIN6wbDXItl95euhFz2uON45H2qjYt+0=
|
||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||
github.com/mailru/easyjson v0.9.1 h1:LbtsOm5WAswyWbvTEOqhypdPeZzHavpZx96/n553mR8=
|
||||
github.com/mark3labs/mcp-go v0.43.2 h1:21PUSlWWiSbUPQwXIJ5WKlETixpFpq+WBpbMGDSVy/I=
|
||||
github.com/mark3labs/mcp-go v0.43.2/go.mod h1:YnJfOL382MIWDx1kMY+2zsRHU/q78dBg9aFb8W6Thdw=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
|
||||
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8/go.mod h1:5nJHM5DyteebpVlHnWMV0rPz6Zp7+xBAnxjb1X5vnTw=
|
||||
github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zIM+UJPGz4=
|
||||
github.com/yosida95/uritemplate/v3 v3.0.2/go.mod h1:ILOh0sOhIJR3+L/8afwt/kE++YT040gmv5BQTMR2HP4=
|
||||
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
|
||||
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
|
||||
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
|
|
@ -1,576 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/forge"
|
||||
)
|
||||
|
||||
// HubService coordinates with the agentic portal for issue assignment and leaderboard.
|
||||
type HubService struct {
|
||||
config *ConfigService
|
||||
client *http.Client
|
||||
connected bool
|
||||
pending []PendingOp
|
||||
mu sync.RWMutex
|
||||
}
|
||||
|
||||
// PendingOp represents an operation queued for retry when the hub is unreachable.
|
||||
type PendingOp struct {
|
||||
Method string `json:"method"`
|
||||
Path string `json:"path"`
|
||||
Body json.RawMessage `json:"body,omitempty"`
|
||||
CreatedAt time.Time `json:"createdAt"`
|
||||
}
|
||||
|
||||
// HubClaim represents a claimed issue from the hub.
|
||||
type HubClaim struct {
|
||||
ID string `json:"id"`
|
||||
IssueURL string `json:"issueUrl"`
|
||||
ClientID string `json:"clientId"`
|
||||
ClaimedAt time.Time `json:"claimedAt"`
|
||||
ExpiresAt time.Time `json:"expiresAt"`
|
||||
Status string `json:"status"`
|
||||
}
|
||||
|
||||
// LeaderboardEntry represents a single entry on the leaderboard.
|
||||
type LeaderboardEntry struct {
|
||||
ClientID string `json:"clientId"`
|
||||
ClientName string `json:"clientName"`
|
||||
Score int `json:"score"`
|
||||
PRsMerged int `json:"prsMerged"`
|
||||
Rank int `json:"rank"`
|
||||
}
|
||||
|
||||
// GlobalStats holds aggregate statistics from the hub.
|
||||
type GlobalStats struct {
|
||||
TotalClients int `json:"totalClients"`
|
||||
TotalClaims int `json:"totalClaims"`
|
||||
TotalPRsMerged int `json:"totalPrsMerged"`
|
||||
ActiveClaims int `json:"activeClaims"`
|
||||
IssuesAvailable int `json:"issuesAvailable"`
|
||||
}
|
||||
|
||||
// ConflictError indicates a 409 response from the hub (e.g. issue already claimed).
|
||||
type ConflictError struct {
|
||||
StatusCode int
|
||||
}
|
||||
|
||||
func (e *ConflictError) Error() string {
|
||||
return fmt.Sprintf("conflict: status %d", e.StatusCode)
|
||||
}
|
||||
|
||||
// NotFoundError indicates a 404 response from the hub.
|
||||
type NotFoundError struct {
|
||||
StatusCode int
|
||||
}
|
||||
|
||||
func (e *NotFoundError) Error() string {
|
||||
return fmt.Sprintf("not found: status %d", e.StatusCode)
|
||||
}
|
||||
|
||||
// NewHubService creates a new HubService with the given config.
|
||||
// If the config has no ClientID, one is generated and persisted.
|
||||
func NewHubService(config *ConfigService) *HubService {
|
||||
h := &HubService{
|
||||
config: config,
|
||||
client: &http.Client{
|
||||
Timeout: 10 * time.Second,
|
||||
},
|
||||
pending: make([]PendingOp, 0),
|
||||
}
|
||||
|
||||
// Generate client ID if not set.
|
||||
if config.GetClientID() == "" {
|
||||
id := generateClientID()
|
||||
_ = config.SetClientID(id)
|
||||
}
|
||||
|
||||
h.loadPendingOps()
|
||||
|
||||
return h
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (h *HubService) ServiceName() string {
|
||||
return "HubService"
|
||||
}
|
||||
|
||||
// GetClientID returns the client ID from config.
|
||||
func (h *HubService) GetClientID() string {
|
||||
return h.config.GetClientID()
|
||||
}
|
||||
|
||||
// IsConnected returns whether the hub was reachable on the last request.
|
||||
func (h *HubService) IsConnected() bool {
|
||||
h.mu.RLock()
|
||||
defer h.mu.RUnlock()
|
||||
return h.connected
|
||||
}
|
||||
|
||||
// generateClientID creates a random hex string (16 bytes = 32 hex chars).
|
||||
func generateClientID() string {
|
||||
b := make([]byte, 16)
|
||||
if _, err := rand.Read(b); err != nil {
|
||||
// Fallback: this should never happen with crypto/rand.
|
||||
return fmt.Sprintf("fallback-%d", time.Now().UnixNano())
|
||||
}
|
||||
return hex.EncodeToString(b)
|
||||
}
|
||||
|
||||
// doRequest builds and executes an HTTP request against the hub API.
|
||||
// It returns the raw *http.Response and any transport-level error.
|
||||
func (h *HubService) doRequest(method, path string, body interface{}) (*http.Response, error) {
|
||||
hubURL := h.config.GetHubURL()
|
||||
if hubURL == "" {
|
||||
return nil, fmt.Errorf("hub URL not configured")
|
||||
}
|
||||
|
||||
fullURL := hubURL + "/api/bugseti" + path
|
||||
|
||||
var bodyReader io.Reader
|
||||
if body != nil {
|
||||
data, err := json.Marshal(body)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("marshal request body: %w", err)
|
||||
}
|
||||
bodyReader = bytes.NewReader(data)
|
||||
}
|
||||
|
||||
req, err := http.NewRequest(method, fullURL, bodyReader)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("build request: %w", err)
|
||||
}
|
||||
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
req.Header.Set("Accept", "application/json")
|
||||
|
||||
token := h.config.GetHubToken()
|
||||
if token != "" {
|
||||
req.Header.Set("Authorization", "Bearer "+token)
|
||||
}
|
||||
|
||||
resp, err := h.client.Do(req)
|
||||
if err != nil {
|
||||
h.mu.Lock()
|
||||
h.connected = false
|
||||
h.mu.Unlock()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
h.mu.Lock()
|
||||
h.connected = true
|
||||
h.mu.Unlock()
|
||||
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
// doJSON executes an HTTP request and decodes the JSON response into dest.
|
||||
// It handles common error status codes with typed errors.
|
||||
func (h *HubService) doJSON(method, path string, body, dest interface{}) error {
|
||||
resp, err := h.doRequest(method, path, body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
switch {
|
||||
case resp.StatusCode == http.StatusUnauthorized:
|
||||
return fmt.Errorf("unauthorised")
|
||||
case resp.StatusCode == http.StatusConflict:
|
||||
return &ConflictError{StatusCode: resp.StatusCode}
|
||||
case resp.StatusCode == http.StatusNotFound:
|
||||
return &NotFoundError{StatusCode: resp.StatusCode}
|
||||
case resp.StatusCode >= 400:
|
||||
respBody, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf("hub error %d: %s", resp.StatusCode, string(respBody))
|
||||
}
|
||||
|
||||
if dest != nil {
|
||||
if err := json.NewDecoder(resp.Body).Decode(dest); err != nil {
|
||||
return fmt.Errorf("decode response: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
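// Illustrative sketch (not part of the original file): callers can distinguish
// the typed errors above with errors.As. The claim path and payload below are
// hypothetical, purely to show the pattern.
//
//	var conflict *ConflictError
//	err := h.doJSON(http.MethodPost, "/claims", payload, &claim)
//	if errors.As(err, &conflict) {
//		// another client claimed the issue first; pick the next one
//	}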
|
||||
|
||||
// queueOp marshals body to JSON and appends a PendingOp to the queue.
|
||||
func (h *HubService) queueOp(method, path string, body interface{}) {
|
||||
var raw json.RawMessage
|
||||
if body != nil {
|
||||
data, err := json.Marshal(body)
|
||||
if err != nil {
|
||||
log.Printf("BugSETI: queueOp marshal error: %v", err)
|
||||
return
|
||||
}
|
||||
raw = data
|
||||
}
|
||||
|
||||
h.mu.Lock()
|
||||
h.pending = append(h.pending, PendingOp{
|
||||
Method: method,
|
||||
Path: path,
|
||||
Body: raw,
|
||||
CreatedAt: time.Now(),
|
||||
})
|
||||
h.mu.Unlock()
|
||||
|
||||
h.savePendingOps()
|
||||
}
|
||||
|
||||
// drainPendingOps replays queued operations against the hub.
|
||||
// 5xx/transport errors are kept for retry; 4xx responses are dropped (stale).
|
||||
func (h *HubService) drainPendingOps() {
|
||||
h.mu.Lock()
|
||||
ops := h.pending
|
||||
h.pending = make([]PendingOp, 0)
|
||||
h.mu.Unlock()
|
||||
|
||||
if len(ops) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
var failed []PendingOp
|
||||
for _, op := range ops {
|
||||
var body interface{}
|
||||
if len(op.Body) > 0 {
|
||||
body = json.RawMessage(op.Body)
|
||||
}
|
||||
|
||||
resp, err := h.doRequest(op.Method, op.Path, body)
|
||||
if err != nil {
|
||||
// Transport error — keep for retry.
|
||||
failed = append(failed, op)
|
||||
continue
|
||||
}
|
||||
resp.Body.Close()
|
||||
|
||||
if resp.StatusCode >= 500 {
|
||||
// Server error — keep for retry.
|
||||
failed = append(failed, op)
|
||||
} // 4xx are dropped (stale).
|
||||
}
|
||||
|
||||
if len(failed) > 0 {
|
||||
h.mu.Lock()
|
||||
h.pending = append(failed, h.pending...)
|
||||
h.mu.Unlock()
|
||||
}
|
||||
|
||||
h.savePendingOps()
|
||||
}
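// Illustrative sketch (not part of the original file): the offline flow this
// supports. The "/stats" path is an example only; Register below calls
// drainPendingOps before issuing new writes.
//
//	h.queueOp(http.MethodPost, "/stats", payload) // hub unreachable: persisted for later
//	// ... connectivity returns ...
//	h.drainPendingOps() // replays the queue; transport and 5xx failures are re-queued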
|
||||
|
||||
// savePendingOps persists the pending operations queue to disk.
|
||||
func (h *HubService) savePendingOps() {
|
||||
dataDir := h.config.GetDataDir()
|
||||
if dataDir == "" {
|
||||
return
|
||||
}
|
||||
|
||||
h.mu.RLock()
|
||||
data, err := json.Marshal(h.pending)
|
||||
h.mu.RUnlock()
|
||||
if err != nil {
|
||||
log.Printf("BugSETI: savePendingOps marshal error: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
path := filepath.Join(dataDir, "hub_pending.json")
|
||||
if err := os.WriteFile(path, data, 0600); err != nil {
|
||||
log.Printf("BugSETI: savePendingOps write error: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// loadPendingOps loads the pending operations queue from disk.
|
||||
// Errors are silently ignored (the file may not exist yet).
|
||||
func (h *HubService) loadPendingOps() {
|
||||
dataDir := h.config.GetDataDir()
|
||||
if dataDir == "" {
|
||||
return
|
||||
}
|
||||
|
||||
path := filepath.Join(dataDir, "hub_pending.json")
|
||||
data, err := os.ReadFile(path)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
var ops []PendingOp
|
||||
if err := json.Unmarshal(data, &ops); err != nil {
|
||||
return
|
||||
}
|
||||
h.pending = ops
|
||||
}
|
||||
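// Added note: the pending queue round-trips through <dataDir>/hub_pending.json
// with 0600 permissions; a missing or unreadable file simply leaves the queue
// empty on startup.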
|
||||
// PendingCount returns the number of queued pending operations.
|
||||
func (h *HubService) PendingCount() int {
|
||||
h.mu.RLock()
|
||||
defer h.mu.RUnlock()
|
||||
return len(h.pending)
|
||||
}
|
||||
|
||||
// ---- Task 4: Auto-Register via Forge Token ----
|
||||
|
||||
// AutoRegister exchanges a Forge API token for a hub API key.
|
||||
// If a hub token is already configured, this is a no-op.
|
||||
func (h *HubService) AutoRegister() error {
|
||||
// Skip if already registered.
|
||||
if h.config.GetHubToken() != "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
hubURL := h.config.GetHubURL()
|
||||
if hubURL == "" {
|
||||
return fmt.Errorf("hub URL not configured")
|
||||
}
|
||||
|
||||
// Resolve forge credentials from config/env.
|
||||
forgeURL := h.config.GetForgeURL()
|
||||
forgeToken := h.config.GetForgeToken()
|
||||
if forgeToken == "" {
|
||||
resolvedURL, resolvedToken, err := forge.ResolveConfig(forgeURL, "")
|
||||
if err != nil {
|
||||
return fmt.Errorf("resolve forge config: %w", err)
|
||||
}
|
||||
forgeURL = resolvedURL
|
||||
forgeToken = resolvedToken
|
||||
}
|
||||
|
||||
if forgeToken == "" {
|
||||
return fmt.Errorf("no forge token available (set FORGE_TOKEN or run: core forge config --token TOKEN)")
|
||||
}
|
||||
|
||||
// Build request body.
|
||||
payload := map[string]string{
|
||||
"forge_url": forgeURL,
|
||||
"forge_token": forgeToken,
|
||||
"client_id": h.config.GetClientID(),
|
||||
}
|
||||
data, err := json.Marshal(payload)
|
||||
if err != nil {
|
||||
return fmt.Errorf("marshal auto-register body: %w", err)
|
||||
}
|
||||
|
||||
// POST directly (no bearer token yet).
|
||||
resp, err := h.client.Post(hubURL+"/api/bugseti/auth/forge", "application/json", bytes.NewReader(data))
|
||||
if err != nil {
|
||||
h.mu.Lock()
|
||||
h.connected = false
|
||||
h.mu.Unlock()
|
||||
return fmt.Errorf("auto-register request: %w", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
h.mu.Lock()
|
||||
h.connected = true
|
||||
h.mu.Unlock()
|
||||
|
||||
if resp.StatusCode >= 400 {
|
||||
respBody, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf("auto-register failed %d: %s", resp.StatusCode, string(respBody))
|
||||
}
|
||||
|
||||
var result struct {
|
||||
APIKey string `json:"api_key"`
|
||||
}
|
||||
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
|
||||
return fmt.Errorf("decode auto-register response: %w", err)
|
||||
}
|
||||
|
||||
if err := h.config.SetHubToken(result.APIKey); err != nil {
|
||||
return fmt.Errorf("cache hub token: %w", err)
|
||||
}
|
||||
|
||||
log.Printf("BugSETI: auto-registered with hub, token cached")
|
||||
return nil
|
||||
}
|
||||
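// Added sketch (call site assumed, not from this file). AutoRegister is safe
// to run on every startup because it returns early once a hub token is cached:
//
//	hub := NewHubService(cfg)
//	if err := hub.AutoRegister(); err != nil {
//		log.Printf("BugSETI: hub auto-register failed: %v", err)
//	}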
|
||||
// ---- Task 5: Write Operations ----
|
||||
|
||||
// Register registers this client with the hub.
|
||||
func (h *HubService) Register() error {
|
||||
h.drainPendingOps()
|
||||
|
||||
name := h.config.GetClientName()
|
||||
clientID := h.config.GetClientID()
|
||||
if name == "" {
|
||||
if len(clientID) >= 8 {
|
||||
name = "BugSETI-" + clientID[:8]
|
||||
} else {
|
||||
name = "BugSETI-" + clientID
|
||||
}
|
||||
}
|
||||
|
||||
body := map[string]string{
|
||||
"client_id": clientID,
|
||||
"name": name,
|
||||
"version": GetVersion(),
|
||||
"os": runtime.GOOS,
|
||||
"arch": runtime.GOARCH,
|
||||
}
|
||||
|
||||
return h.doJSON("POST", "/register", body, nil)
|
||||
}
|
||||
|
||||
// Heartbeat sends a heartbeat to the hub.
|
||||
func (h *HubService) Heartbeat() error {
|
||||
body := map[string]string{
|
||||
"client_id": h.config.GetClientID(),
|
||||
}
|
||||
return h.doJSON("POST", "/heartbeat", body, nil)
|
||||
}
|
||||
|
||||
// ClaimIssue claims an issue on the hub, returning the claim details.
|
||||
// Returns a ConflictError if the issue is already claimed by another client.
|
||||
func (h *HubService) ClaimIssue(issue *Issue) (*HubClaim, error) {
|
||||
h.drainPendingOps()
|
||||
|
||||
body := map[string]interface{}{
|
||||
"client_id": h.config.GetClientID(),
|
||||
"issue_id": issue.ID,
|
||||
"repo": issue.Repo,
|
||||
"issue_number": issue.Number,
|
||||
"title": issue.Title,
|
||||
"url": issue.URL,
|
||||
}
|
||||
|
||||
var claim HubClaim
|
||||
if err := h.doJSON("POST", "/issues/claim", body, &claim); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &claim, nil
|
||||
}
|
||||
|
||||
// UpdateStatus updates the status of a claimed issue on the hub.
|
||||
func (h *HubService) UpdateStatus(issueID, status, prURL string, prNumber int) error {
|
||||
body := map[string]interface{}{
|
||||
"client_id": h.config.GetClientID(),
|
||||
"status": status,
|
||||
}
|
||||
if prURL != "" {
|
||||
body["pr_url"] = prURL
|
||||
}
|
||||
if prNumber > 0 {
|
||||
body["pr_number"] = prNumber
|
||||
}
|
||||
|
||||
path := "/issues/" + url.PathEscape(issueID) + "/status"
|
||||
return h.doJSON("PATCH", path, body, nil)
|
||||
}
|
||||
|
||||
// ReleaseClaim releases a previously claimed issue back to the pool.
|
||||
func (h *HubService) ReleaseClaim(issueID string) error {
|
||||
body := map[string]string{
|
||||
"client_id": h.config.GetClientID(),
|
||||
}
|
||||
|
||||
path := "/issues/" + url.PathEscape(issueID) + "/claim"
|
||||
return h.doJSON("DELETE", path, body, nil)
|
||||
}
|
||||
|
||||
// SyncStats uploads local statistics to the hub.
|
||||
func (h *HubService) SyncStats(stats *Stats) error {
|
||||
// Build repos_contributed as a flat string slice from the map keys.
|
||||
repos := make([]string, 0, len(stats.ReposContributed))
|
||||
for k := range stats.ReposContributed {
|
||||
repos = append(repos, k)
|
||||
}
|
||||
|
||||
body := map[string]interface{}{
|
||||
"client_id": h.config.GetClientID(),
|
||||
"stats": map[string]interface{}{
|
||||
"issues_attempted": stats.IssuesAttempted,
|
||||
"issues_completed": stats.IssuesCompleted,
|
||||
"issues_skipped": stats.IssuesSkipped,
|
||||
"prs_submitted": stats.PRsSubmitted,
|
||||
"prs_merged": stats.PRsMerged,
|
||||
"prs_rejected": stats.PRsRejected,
|
||||
"current_streak": stats.CurrentStreak,
|
||||
"longest_streak": stats.LongestStreak,
|
||||
"total_time_minutes": int(stats.TotalTimeSpent.Minutes()),
|
||||
"repos_contributed": repos,
|
||||
},
|
||||
}
|
||||
|
||||
return h.doJSON("POST", "/stats/sync", body, nil)
|
||||
}
|
||||
|
||||
// ---- Task 6: Read Operations ----
|
||||
|
||||
// IsIssueClaimed checks whether an issue is currently claimed on the hub.
|
||||
// Returns the claim if it exists, or (nil, nil) if the issue is not claimed (404).
|
||||
func (h *HubService) IsIssueClaimed(issueID string) (*HubClaim, error) {
|
||||
path := "/issues/" + url.PathEscape(issueID)
|
||||
|
||||
var claim HubClaim
|
||||
if err := h.doJSON("GET", path, nil, &claim); err != nil {
|
||||
if _, ok := err.(*NotFoundError); ok {
|
||||
return nil, nil
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
return &claim, nil
|
||||
}
|
||||
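// Added sketch (caller assumed): the (nil, nil) return distinguishes a free
// issue from a transport error.
//
//	claim, err := h.IsIssueClaimed(issue.ID)
//	switch {
//	case err != nil:
//		// hub unreachable or request failed
//	case claim == nil:
//		// not claimed, free to take
//	default:
//		// already claimed by claim.ClientID
//	}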
|
||||
// ListClaims returns claimed issues, optionally filtered by status and/or repo.
|
||||
func (h *HubService) ListClaims(status, repo string) ([]*HubClaim, error) {
|
||||
params := url.Values{}
|
||||
if status != "" {
|
||||
params.Set("status", status)
|
||||
}
|
||||
if repo != "" {
|
||||
params.Set("repo", repo)
|
||||
}
|
||||
|
||||
path := "/issues/claimed"
|
||||
if encoded := params.Encode(); encoded != "" {
|
||||
path += "?" + encoded
|
||||
}
|
||||
|
||||
var claims []*HubClaim
|
||||
if err := h.doJSON("GET", path, nil, &claims); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return claims, nil
|
||||
}
|
||||
|
||||
// leaderboardResponse wraps the hub leaderboard JSON envelope.
|
||||
type leaderboardResponse struct {
|
||||
Entries []LeaderboardEntry `json:"entries"`
|
||||
TotalParticipants int `json:"totalParticipants"`
|
||||
}
|
||||
|
||||
// GetLeaderboard fetches the top N leaderboard entries from the hub.
|
||||
func (h *HubService) GetLeaderboard(limit int) ([]LeaderboardEntry, int, error) {
|
||||
path := fmt.Sprintf("/leaderboard?limit=%d", limit)
|
||||
|
||||
var resp leaderboardResponse
|
||||
if err := h.doJSON("GET", path, nil, &resp); err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
return resp.Entries, resp.TotalParticipants, nil
|
||||
}
|
||||
|
||||
// GetGlobalStats fetches aggregate statistics from the hub.
|
||||
func (h *HubService) GetGlobalStats() (*GlobalStats, error) {
|
||||
var stats GlobalStats
|
||||
if err := h.doJSON("GET", "/stats", nil, &stats); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &stats, nil
|
||||
}
|
||||
|
|
@@ -1,558 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"os"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func testHubService(t *testing.T, serverURL string) *HubService {
|
||||
t.Helper()
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
if serverURL != "" {
|
||||
cfg.config.HubURL = serverURL
|
||||
}
|
||||
return NewHubService(cfg)
|
||||
}
|
||||
|
||||
// ---- NewHubService ----
|
||||
|
||||
func TestNewHubService_Good(t *testing.T) {
|
||||
h := testHubService(t, "")
|
||||
require.NotNil(t, h)
|
||||
assert.NotNil(t, h.config)
|
||||
assert.NotNil(t, h.client)
|
||||
assert.False(t, h.IsConnected())
|
||||
}
|
||||
|
||||
func TestHubServiceName_Good(t *testing.T) {
|
||||
h := testHubService(t, "")
|
||||
assert.Equal(t, "HubService", h.ServiceName())
|
||||
}
|
||||
|
||||
func TestNewHubService_Good_GeneratesClientID(t *testing.T) {
|
||||
h := testHubService(t, "")
|
||||
id := h.GetClientID()
|
||||
assert.NotEmpty(t, id)
|
||||
// 16 bytes = 32 hex characters
|
||||
assert.Len(t, id, 32)
|
||||
}
|
||||
|
||||
func TestNewHubService_Good_ReusesClientID(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.ClientID = "existing-client-id"
|
||||
|
||||
h := NewHubService(cfg)
|
||||
assert.Equal(t, "existing-client-id", h.GetClientID())
|
||||
}
|
||||
|
||||
// ---- doRequest ----
|
||||
|
||||
func TestDoRequest_Good(t *testing.T) {
|
||||
var gotAuth string
|
||||
var gotContentType string
|
||||
var gotAccept string
|
||||
var gotBody map[string]string
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
gotAuth = r.Header.Get("Authorization")
|
||||
gotContentType = r.Header.Get("Content-Type")
|
||||
gotAccept = r.Header.Get("Accept")
|
||||
|
||||
if r.Body != nil {
|
||||
_ = json.NewDecoder(r.Body).Decode(&gotBody)
|
||||
}
|
||||
|
||||
w.WriteHeader(http.StatusOK)
|
||||
_, _ = w.Write([]byte(`{"ok":true}`))
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "test-token-123"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
body := map[string]string{"key": "value"}
|
||||
resp, err := h.doRequest("POST", "/test", body)
|
||||
require.NoError(t, err)
|
||||
defer resp.Body.Close()
|
||||
|
||||
assert.Equal(t, http.StatusOK, resp.StatusCode)
|
||||
assert.Equal(t, "Bearer test-token-123", gotAuth)
|
||||
assert.Equal(t, "application/json", gotContentType)
|
||||
assert.Equal(t, "application/json", gotAccept)
|
||||
assert.Equal(t, "value", gotBody["key"])
|
||||
assert.True(t, h.IsConnected())
|
||||
}
|
||||
|
||||
func TestDoRequest_Bad_NoHubURL(t *testing.T) {
|
||||
h := testHubService(t, "")
|
||||
|
||||
resp, err := h.doRequest("GET", "/test", nil)
|
||||
assert.Nil(t, resp)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "hub URL not configured")
|
||||
}
|
||||
|
||||
func TestDoRequest_Bad_NetworkError(t *testing.T) {
|
||||
// Point to a port where nothing is listening.
|
||||
h := testHubService(t, "http://127.0.0.1:1")
|
||||
|
||||
resp, err := h.doRequest("GET", "/test", nil)
|
||||
assert.Nil(t, resp)
|
||||
assert.Error(t, err)
|
||||
assert.False(t, h.IsConnected())
|
||||
}
|
||||
|
||||
// ---- Task 4: AutoRegister ----
|
||||
|
||||
func TestAutoRegister_Good(t *testing.T) {
|
||||
var gotBody map[string]string
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, "/api/bugseti/auth/forge", r.URL.Path)
|
||||
assert.Equal(t, "POST", r.Method)
|
||||
|
||||
_ = json.NewDecoder(r.Body).Decode(&gotBody)
|
||||
|
||||
w.WriteHeader(http.StatusCreated)
|
||||
_, _ = w.Write([]byte(`{"api_key":"ak_test_12345"}`))
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.ForgeURL = "https://forge.example.com"
|
||||
cfg.config.ForgeToken = "forge-tok-abc"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
err := h.AutoRegister()
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify token was cached.
|
||||
assert.Equal(t, "ak_test_12345", h.config.GetHubToken())
|
||||
|
||||
// Verify request body.
|
||||
assert.Equal(t, "https://forge.example.com", gotBody["forge_url"])
|
||||
assert.Equal(t, "forge-tok-abc", gotBody["forge_token"])
|
||||
assert.NotEmpty(t, gotBody["client_id"])
|
||||
}
|
||||
|
||||
func TestAutoRegister_Bad_NoForgeToken(t *testing.T) {
|
||||
// Isolate from user's real ~/.core/config.yaml and env vars.
|
||||
origHome := os.Getenv("HOME")
|
||||
t.Setenv("HOME", t.TempDir())
|
||||
t.Setenv("FORGE_TOKEN", "")
|
||||
t.Setenv("FORGE_URL", "")
|
||||
defer os.Setenv("HOME", origHome)
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = "https://hub.example.com"
|
||||
// No forge token set, and env/config are empty in test.
|
||||
h := NewHubService(cfg)
|
||||
|
||||
err := h.AutoRegister()
|
||||
require.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "no forge token available")
|
||||
}
|
||||
|
||||
func TestAutoRegister_Good_SkipsIfAlreadyRegistered(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = "https://hub.example.com"
|
||||
cfg.config.HubToken = "existing-token"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
err := h.AutoRegister()
|
||||
require.NoError(t, err)
|
||||
|
||||
// Token should remain unchanged.
|
||||
assert.Equal(t, "existing-token", h.config.GetHubToken())
|
||||
}
|
||||
|
||||
// ---- Task 5: Write Operations ----
|
||||
|
||||
func TestRegister_Good(t *testing.T) {
|
||||
var gotPath string
|
||||
var gotMethod string
|
||||
var gotBody map[string]string
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
gotPath = r.URL.Path
|
||||
gotMethod = r.Method
|
||||
_ = json.NewDecoder(r.Body).Decode(&gotBody)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
cfg.config.ClientName = "MyBugSETI"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
err := h.Register()
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "/api/bugseti/register", gotPath)
|
||||
assert.Equal(t, "POST", gotMethod)
|
||||
assert.Equal(t, "MyBugSETI", gotBody["name"])
|
||||
assert.NotEmpty(t, gotBody["client_id"])
|
||||
assert.NotEmpty(t, gotBody["version"])
|
||||
assert.NotEmpty(t, gotBody["os"])
|
||||
assert.NotEmpty(t, gotBody["arch"])
|
||||
}
|
||||
|
||||
func TestHeartbeat_Good(t *testing.T) {
|
||||
var gotPath string
|
||||
var gotMethod string
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
gotPath = r.URL.Path
|
||||
gotMethod = r.Method
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
err := h.Heartbeat()
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "/api/bugseti/heartbeat", gotPath)
|
||||
assert.Equal(t, "POST", gotMethod)
|
||||
}
|
||||
|
||||
func TestClaimIssue_Good(t *testing.T) {
|
||||
now := time.Now().Truncate(time.Second)
|
||||
expires := now.Add(30 * time.Minute)
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, "/api/bugseti/issues/claim", r.URL.Path)
|
||||
assert.Equal(t, "POST", r.Method)
|
||||
|
||||
var body map[string]interface{}
|
||||
_ = json.NewDecoder(r.Body).Decode(&body)
|
||||
assert.Equal(t, "issue-42", body["issue_id"])
|
||||
assert.Equal(t, "org/repo", body["repo"])
|
||||
assert.Equal(t, float64(42), body["issue_number"])
|
||||
assert.Equal(t, "Fix the bug", body["title"])
|
||||
|
||||
w.WriteHeader(http.StatusOK)
|
||||
resp := HubClaim{
|
||||
ID: "claim-1",
|
||||
IssueURL: "https://github.com/org/repo/issues/42",
|
||||
ClientID: "test",
|
||||
ClaimedAt: now,
|
||||
ExpiresAt: expires,
|
||||
Status: "claimed",
|
||||
}
|
||||
_ = json.NewEncoder(w).Encode(resp)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
issue := &Issue{
|
||||
ID: "issue-42",
|
||||
Number: 42,
|
||||
Repo: "org/repo",
|
||||
Title: "Fix the bug",
|
||||
URL: "https://github.com/org/repo/issues/42",
|
||||
}
|
||||
|
||||
claim, err := h.ClaimIssue(issue)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, claim)
|
||||
assert.Equal(t, "claim-1", claim.ID)
|
||||
assert.Equal(t, "claimed", claim.Status)
|
||||
}
|
||||
|
||||
func TestClaimIssue_Bad_Conflict(t *testing.T) {
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(http.StatusConflict)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
issue := &Issue{ID: "issue-99", Number: 99, Repo: "org/repo", Title: "Already claimed"}
|
||||
|
||||
claim, err := h.ClaimIssue(issue)
|
||||
assert.Nil(t, claim)
|
||||
require.Error(t, err)
|
||||
|
||||
var conflictErr *ConflictError
|
||||
assert.ErrorAs(t, err, &conflictErr)
|
||||
}
|
||||
|
||||
func TestUpdateStatus_Good(t *testing.T) {
|
||||
var gotPath string
|
||||
var gotMethod string
|
||||
var gotBody map[string]interface{}
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
gotPath = r.URL.Path
|
||||
gotMethod = r.Method
|
||||
_ = json.NewDecoder(r.Body).Decode(&gotBody)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
err := h.UpdateStatus("issue-42", "completed", "https://github.com/org/repo/pull/10", 10)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "PATCH", gotMethod)
|
||||
assert.Equal(t, "/api/bugseti/issues/issue-42/status", gotPath)
|
||||
assert.Equal(t, "completed", gotBody["status"])
|
||||
assert.Equal(t, "https://github.com/org/repo/pull/10", gotBody["pr_url"])
|
||||
assert.Equal(t, float64(10), gotBody["pr_number"])
|
||||
}
|
||||
|
||||
func TestSyncStats_Good(t *testing.T) {
|
||||
var gotBody map[string]interface{}
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, "/api/bugseti/stats/sync", r.URL.Path)
|
||||
assert.Equal(t, "POST", r.Method)
|
||||
_ = json.NewDecoder(r.Body).Decode(&gotBody)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
stats := &Stats{
|
||||
IssuesAttempted: 10,
|
||||
IssuesCompleted: 7,
|
||||
IssuesSkipped: 3,
|
||||
PRsSubmitted: 6,
|
||||
PRsMerged: 5,
|
||||
PRsRejected: 1,
|
||||
CurrentStreak: 3,
|
||||
LongestStreak: 5,
|
||||
TotalTimeSpent: 90 * time.Minute,
|
||||
ReposContributed: map[string]*RepoStats{
|
||||
"org/repo-a": {Name: "org/repo-a"},
|
||||
"org/repo-b": {Name: "org/repo-b"},
|
||||
},
|
||||
}
|
||||
|
||||
err := h.SyncStats(stats)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.NotEmpty(t, gotBody["client_id"])
|
||||
statsMap, ok := gotBody["stats"].(map[string]interface{})
|
||||
require.True(t, ok)
|
||||
assert.Equal(t, float64(10), statsMap["issues_attempted"])
|
||||
assert.Equal(t, float64(7), statsMap["issues_completed"])
|
||||
assert.Equal(t, float64(3), statsMap["issues_skipped"])
|
||||
assert.Equal(t, float64(6), statsMap["prs_submitted"])
|
||||
assert.Equal(t, float64(5), statsMap["prs_merged"])
|
||||
assert.Equal(t, float64(1), statsMap["prs_rejected"])
|
||||
assert.Equal(t, float64(3), statsMap["current_streak"])
|
||||
assert.Equal(t, float64(5), statsMap["longest_streak"])
|
||||
assert.Equal(t, float64(90), statsMap["total_time_minutes"])
|
||||
|
||||
reposRaw, ok := statsMap["repos_contributed"].([]interface{})
|
||||
require.True(t, ok)
|
||||
assert.Len(t, reposRaw, 2)
|
||||
}
|
||||
|
||||
// ---- Task 6: Read Operations ----
|
||||
|
||||
func TestIsIssueClaimed_Good_Claimed(t *testing.T) {
|
||||
now := time.Now().Truncate(time.Second)
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, "/api/bugseti/issues/issue-42", r.URL.Path)
|
||||
assert.Equal(t, "GET", r.Method)
|
||||
|
||||
w.WriteHeader(http.StatusOK)
|
||||
claim := HubClaim{
|
||||
ID: "claim-1",
|
||||
IssueURL: "https://github.com/org/repo/issues/42",
|
||||
ClientID: "client-abc",
|
||||
ClaimedAt: now,
|
||||
Status: "claimed",
|
||||
}
|
||||
_ = json.NewEncoder(w).Encode(claim)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
claim, err := h.IsIssueClaimed("issue-42")
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, claim)
|
||||
assert.Equal(t, "claim-1", claim.ID)
|
||||
assert.Equal(t, "claimed", claim.Status)
|
||||
}
|
||||
|
||||
func TestIsIssueClaimed_Good_NotClaimed(t *testing.T) {
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(http.StatusNotFound)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
claim, err := h.IsIssueClaimed("issue-999")
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, claim)
|
||||
}
|
||||
|
||||
func TestGetLeaderboard_Good(t *testing.T) {
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, "/api/bugseti/leaderboard", r.URL.Path)
|
||||
assert.Equal(t, "GET", r.Method)
|
||||
assert.Equal(t, "10", r.URL.Query().Get("limit"))
|
||||
|
||||
resp := leaderboardResponse{
|
||||
Entries: []LeaderboardEntry{
|
||||
{ClientID: "a", ClientName: "Alice", Score: 100, PRsMerged: 10, Rank: 1},
|
||||
{ClientID: "b", ClientName: "Bob", Score: 80, PRsMerged: 8, Rank: 2},
|
||||
},
|
||||
TotalParticipants: 42,
|
||||
}
|
||||
w.WriteHeader(http.StatusOK)
|
||||
_ = json.NewEncoder(w).Encode(resp)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
entries, total, err := h.GetLeaderboard(10)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, 42, total)
|
||||
require.Len(t, entries, 2)
|
||||
assert.Equal(t, "Alice", entries[0].ClientName)
|
||||
assert.Equal(t, 1, entries[0].Rank)
|
||||
assert.Equal(t, "Bob", entries[1].ClientName)
|
||||
}
|
||||
|
||||
func TestGetGlobalStats_Good(t *testing.T) {
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, "/api/bugseti/stats", r.URL.Path)
|
||||
assert.Equal(t, "GET", r.Method)
|
||||
|
||||
stats := GlobalStats{
|
||||
TotalClients: 100,
|
||||
TotalClaims: 500,
|
||||
TotalPRsMerged: 300,
|
||||
ActiveClaims: 25,
|
||||
IssuesAvailable: 150,
|
||||
}
|
||||
w.WriteHeader(http.StatusOK)
|
||||
_ = json.NewEncoder(w).Encode(stats)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
stats, err := h.GetGlobalStats()
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, stats)
|
||||
assert.Equal(t, 100, stats.TotalClients)
|
||||
assert.Equal(t, 500, stats.TotalClaims)
|
||||
assert.Equal(t, 300, stats.TotalPRsMerged)
|
||||
assert.Equal(t, 25, stats.ActiveClaims)
|
||||
assert.Equal(t, 150, stats.IssuesAvailable)
|
||||
}
|
||||
|
||||
// ---- Task 7: Pending Operations Queue ----
|
||||
|
||||
func TestPendingOps_Good_QueueAndDrain(t *testing.T) {
|
||||
var callCount int32
|
||||
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
atomic.AddInt32(&callCount, 1)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = srv.URL
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
// Manually queue a pending op (simulates a previous failed request).
|
||||
h.queueOp("POST", "/heartbeat", map[string]string{"client_id": "test"})
|
||||
assert.Equal(t, 1, h.PendingCount())
|
||||
|
||||
// Register() calls drainPendingOps() first, then sends its own request.
|
||||
err := h.Register()
|
||||
require.NoError(t, err)
|
||||
|
||||
// At least 2 calls: 1 from drain (the queued heartbeat) + 1 from Register itself.
|
||||
assert.GreaterOrEqual(t, atomic.LoadInt32(&callCount), int32(2))
|
||||
assert.Equal(t, 0, h.PendingCount())
|
||||
}
|
||||
|
||||
func TestPendingOps_Good_PersistAndLoad(t *testing.T) {
|
||||
cfg1 := testConfigService(t, nil, nil)
|
||||
cfg1.config.HubURL = "https://hub.example.com"
|
||||
cfg1.config.HubToken = "tok"
|
||||
h1 := NewHubService(cfg1)
|
||||
|
||||
// Queue an op — this also calls savePendingOps.
|
||||
h1.queueOp("POST", "/heartbeat", map[string]string{"client_id": "test"})
|
||||
assert.Equal(t, 1, h1.PendingCount())
|
||||
|
||||
// Create a second HubService with the same data dir.
|
||||
// NewHubService calls loadPendingOps() in its constructor.
|
||||
cfg2 := testConfigService(t, nil, nil)
|
||||
cfg2.config.DataDir = cfg1.config.DataDir // Share the same data dir.
|
||||
cfg2.config.HubURL = "https://hub.example.com"
|
||||
cfg2.config.HubToken = "tok"
|
||||
h2 := NewHubService(cfg2)
|
||||
|
||||
assert.Equal(t, 1, h2.PendingCount())
|
||||
}
|
||||
|
||||
func TestPendingCount_Good(t *testing.T) {
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
cfg.config.HubURL = "https://hub.example.com"
|
||||
cfg.config.HubToken = "tok"
|
||||
h := NewHubService(cfg)
|
||||
|
||||
assert.Equal(t, 0, h.PendingCount())
|
||||
|
||||
h.queueOp("POST", "/test1", nil)
|
||||
assert.Equal(t, 1, h.PendingCount())
|
||||
|
||||
h.queueOp("POST", "/test2", map[string]string{"key": "val"})
|
||||
assert.Equal(t, 2, h.PendingCount())
|
||||
}
|
||||
|
|
@@ -1,246 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/mark3labs/mcp-go/client"
|
||||
"github.com/mark3labs/mcp-go/mcp"
|
||||
)
|
||||
|
||||
type Marketplace struct {
|
||||
Schema string `json:"$schema,omitempty"`
|
||||
Name string `json:"name"`
|
||||
Description string `json:"description"`
|
||||
Owner MarketplaceOwner `json:"owner"`
|
||||
Plugins []MarketplacePlugin `json:"plugins"`
|
||||
}
|
||||
|
||||
type MarketplaceOwner struct {
|
||||
Name string `json:"name"`
|
||||
Email string `json:"email"`
|
||||
}
|
||||
|
||||
type MarketplacePlugin struct {
|
||||
Name string `json:"name"`
|
||||
Description string `json:"description"`
|
||||
Version string `json:"version"`
|
||||
Source string `json:"source"`
|
||||
Category string `json:"category"`
|
||||
}
|
||||
|
||||
type PluginInfo struct {
|
||||
Plugin MarketplacePlugin `json:"plugin"`
|
||||
Path string `json:"path"`
|
||||
Manifest map[string]any `json:"manifest,omitempty"`
|
||||
Commands []string `json:"commands,omitempty"`
|
||||
Skills []string `json:"skills,omitempty"`
|
||||
}
|
||||
|
||||
type EthicsContext struct {
|
||||
Modal string `json:"modal"`
|
||||
Axioms map[string]any `json:"axioms"`
|
||||
}
|
||||
|
||||
type marketplaceClient interface {
|
||||
ListMarketplace(ctx context.Context) ([]MarketplacePlugin, error)
|
||||
PluginInfo(ctx context.Context, name string) (*PluginInfo, error)
|
||||
EthicsCheck(ctx context.Context) (*EthicsContext, error)
|
||||
Close() error
|
||||
}
|
||||
|
||||
type mcpMarketplaceClient struct {
|
||||
client *client.Client
|
||||
}
|
||||
|
||||
func newMarketplaceClient(ctx context.Context, rootHint string) (marketplaceClient, error) {
|
||||
if ctx == nil {
|
||||
ctx = context.Background()
|
||||
}
|
||||
|
||||
command, args, err := resolveMarketplaceCommand(rootHint)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
mcpClient, err := client.NewStdioMCPClient(command, nil, args...)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to start marketplace MCP client: %w", err)
|
||||
}
|
||||
|
||||
initRequest := mcp.InitializeRequest{}
|
||||
initRequest.Params.ProtocolVersion = mcp.LATEST_PROTOCOL_VERSION
|
||||
initRequest.Params.ClientInfo = mcp.Implementation{
|
||||
Name: "bugseti",
|
||||
Version: GetVersion(),
|
||||
}
|
||||
|
||||
initCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
|
||||
defer cancel()
|
||||
if _, err := mcpClient.Initialize(initCtx, initRequest); err != nil {
|
||||
_ = mcpClient.Close()
|
||||
return nil, fmt.Errorf("failed to initialize marketplace MCP client: %w", err)
|
||||
}
|
||||
|
||||
return &mcpMarketplaceClient{client: mcpClient}, nil
|
||||
}
|
||||
|
||||
func (c *mcpMarketplaceClient) Close() error {
|
||||
if c == nil || c.client == nil {
|
||||
return nil
|
||||
}
|
||||
return c.client.Close()
|
||||
}
|
||||
|
||||
func (c *mcpMarketplaceClient) ListMarketplace(ctx context.Context) ([]MarketplacePlugin, error) {
|
||||
var marketplace Marketplace
|
||||
if err := c.callToolStructured(ctx, "marketplace_list", nil, &marketplace); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return marketplace.Plugins, nil
|
||||
}
|
||||
|
||||
func (c *mcpMarketplaceClient) PluginInfo(ctx context.Context, name string) (*PluginInfo, error) {
|
||||
var info PluginInfo
|
||||
args := map[string]any{"name": name}
|
||||
if err := c.callToolStructured(ctx, "marketplace_plugin_info", args, &info); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &info, nil
|
||||
}
|
||||
|
||||
func (c *mcpMarketplaceClient) EthicsCheck(ctx context.Context) (*EthicsContext, error) {
|
||||
var ethics EthicsContext
|
||||
if err := c.callToolStructured(ctx, "ethics_check", nil, &ethics); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &ethics, nil
|
||||
}
|
||||
|
||||
func (c *mcpMarketplaceClient) callToolStructured(ctx context.Context, name string, args map[string]any, target any) error {
|
||||
if c == nil || c.client == nil {
|
||||
return errors.New("marketplace client is not initialized")
|
||||
}
|
||||
if ctx == nil {
|
||||
ctx = context.Background()
|
||||
}
|
||||
|
||||
request := mcp.CallToolRequest{}
|
||||
request.Params.Name = name
|
||||
if args != nil {
|
||||
request.Params.Arguments = args
|
||||
}
|
||||
|
||||
result, err := c.client.CallTool(ctx, request)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if result == nil {
|
||||
return errors.New("marketplace tool returned no result")
|
||||
}
|
||||
if result.IsError {
|
||||
return fmt.Errorf("marketplace tool %s error: %s", name, toolResultMessage(result))
|
||||
}
|
||||
if result.StructuredContent == nil {
|
||||
return fmt.Errorf("marketplace tool %s returned no structured content", name)
|
||||
}
|
||||
payload, err := json.Marshal(result.StructuredContent)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encode marketplace response: %w", err)
|
||||
}
|
||||
if err := json.Unmarshal(payload, target); err != nil {
|
||||
return fmt.Errorf("failed to decode marketplace response: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func toolResultMessage(result *mcp.CallToolResult) string {
|
||||
if result == nil {
|
||||
return "unknown error"
|
||||
}
|
||||
for _, content := range result.Content {
|
||||
switch value := content.(type) {
|
||||
case mcp.TextContent:
|
||||
if value.Text != "" {
|
||||
return value.Text
|
||||
}
|
||||
case *mcp.TextContent:
|
||||
if value != nil && value.Text != "" {
|
||||
return value.Text
|
||||
}
|
||||
}
|
||||
}
|
||||
return "unknown error"
|
||||
}
|
||||
|
||||
func resolveMarketplaceCommand(rootHint string) (string, []string, error) {
|
||||
if command := strings.TrimSpace(os.Getenv("BUGSETI_MCP_COMMAND")); command != "" {
|
||||
args := strings.Fields(os.Getenv("BUGSETI_MCP_ARGS"))
|
||||
return command, args, nil
|
||||
}
|
||||
|
||||
if root := strings.TrimSpace(rootHint); root != "" {
|
||||
path := filepath.Join(root, "mcp")
|
||||
return "go", []string{"run", path}, nil
|
||||
}
|
||||
|
||||
if root := strings.TrimSpace(os.Getenv("BUGSETI_MCP_ROOT")); root != "" {
|
||||
path := filepath.Join(root, "mcp")
|
||||
return "go", []string{"run", path}, nil
|
||||
}
|
||||
|
||||
if root, ok := findCoreAgentRoot(); ok {
|
||||
return "go", []string{"run", filepath.Join(root, "mcp")}, nil
|
||||
}
|
||||
|
||||
return "", nil, fmt.Errorf("marketplace MCP server not configured (set BUGSETI_MCP_COMMAND or BUGSETI_MCP_ROOT)")
|
||||
}
|
||||
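// Added note, summarising the checks above in order:
//   1. BUGSETI_MCP_COMMAND (with BUGSETI_MCP_ARGS split into arguments)
//   2. an explicit rootHint      -> "go run <root>/mcp"
//   3. BUGSETI_MCP_ROOT          -> "go run <root>/mcp"
//   4. a core-agent checkout found near the working directory or executable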
|
||||
func findCoreAgentRoot() (string, bool) {
|
||||
var candidates []string
|
||||
if cwd, err := os.Getwd(); err == nil {
|
||||
candidates = append(candidates, cwd)
|
||||
candidates = append(candidates, filepath.Dir(cwd))
|
||||
}
|
||||
if exe, err := os.Executable(); err == nil {
|
||||
exeDir := filepath.Dir(exe)
|
||||
candidates = append(candidates, exeDir)
|
||||
candidates = append(candidates, filepath.Dir(exeDir))
|
||||
}
|
||||
|
||||
seen := make(map[string]bool)
|
||||
for _, base := range candidates {
|
||||
base = filepath.Clean(base)
|
||||
if seen[base] {
|
||||
continue
|
||||
}
|
||||
seen[base] = true
|
||||
|
||||
root := filepath.Join(base, "core-agent")
|
||||
if hasMcpDir(root) {
|
||||
return root, true
|
||||
}
|
||||
|
||||
root = filepath.Join(base, "..", "core-agent")
|
||||
if hasMcpDir(root) {
|
||||
return filepath.Clean(root), true
|
||||
}
|
||||
}
|
||||
|
||||
return "", false
|
||||
}
|
||||
|
||||
func hasMcpDir(root string) bool {
|
||||
if root == "" {
|
||||
return false
|
||||
}
|
||||
info, err := os.Stat(filepath.Join(root, "mcp", "main.go"))
|
||||
return err == nil && !info.IsDir()
|
||||
}
|
||||
|
|
@@ -1,252 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"log"
|
||||
"os/exec"
|
||||
"runtime"
|
||||
"time"
|
||||
)
|
||||
|
||||
// NotifyService handles desktop notifications.
|
||||
type NotifyService struct {
|
||||
enabled bool
|
||||
sound bool
|
||||
config *ConfigService
|
||||
}
|
||||
|
||||
// NewNotifyService creates a new NotifyService.
|
||||
func NewNotifyService(config *ConfigService) *NotifyService {
|
||||
return &NotifyService{
|
||||
enabled: true,
|
||||
sound: true,
|
||||
config: config,
|
||||
}
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (n *NotifyService) ServiceName() string {
|
||||
return "NotifyService"
|
||||
}
|
||||
|
||||
// SetEnabled enables or disables notifications.
|
||||
func (n *NotifyService) SetEnabled(enabled bool) {
|
||||
n.enabled = enabled
|
||||
}
|
||||
|
||||
// SetSound enables or disables notification sounds.
|
||||
func (n *NotifyService) SetSound(sound bool) {
|
||||
n.sound = sound
|
||||
}
|
||||
|
||||
// Notify sends a desktop notification.
|
||||
func (n *NotifyService) Notify(title, message string) error {
|
||||
if !n.enabled {
|
||||
return nil
|
||||
}
|
||||
|
||||
guard := getEthicsGuardWithRoot(context.Background(), n.getMarketplaceRoot())
|
||||
safeTitle := guard.SanitizeNotification(title)
|
||||
safeMessage := guard.SanitizeNotification(message)
|
||||
|
||||
log.Printf("Notification: %s - %s", safeTitle, safeMessage)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
var err error
|
||||
switch runtime.GOOS {
|
||||
case "darwin":
|
||||
err = n.notifyMacOS(ctx, safeTitle, safeMessage)
|
||||
case "linux":
|
||||
err = n.notifyLinux(ctx, safeTitle, safeMessage)
|
||||
case "windows":
|
||||
err = n.notifyWindows(ctx, safeTitle, safeMessage)
|
||||
default:
|
||||
err = fmt.Errorf("unsupported platform: %s", runtime.GOOS)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
log.Printf("Notification error: %v", err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
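// Added note: both title and message pass through the ethics guard sanitiser
// before reaching the platform command, and delivery failures are logged as
// well as returned to the caller.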
|
||||
func (n *NotifyService) getMarketplaceRoot() string {
|
||||
if n == nil || n.config == nil {
|
||||
return ""
|
||||
}
|
||||
return n.config.GetMarketplaceMCPRoot()
|
||||
}
|
||||
|
||||
// NotifyIssue sends a notification about a new issue.
|
||||
func (n *NotifyService) NotifyIssue(issue *Issue) error {
|
||||
title := "New Issue Available"
|
||||
message := fmt.Sprintf("%s: %s", issue.Repo, issue.Title)
|
||||
return n.Notify(title, message)
|
||||
}
|
||||
|
||||
// NotifyPRStatus sends a notification about a PR status change.
|
||||
func (n *NotifyService) NotifyPRStatus(repo string, prNumber int, status string) error {
|
||||
title := "PR Status Update"
|
||||
message := fmt.Sprintf("%s #%d: %s", repo, prNumber, status)
|
||||
return n.Notify(title, message)
|
||||
}
|
||||
|
||||
// notifyMacOS sends a notification on macOS using osascript.
|
||||
func (n *NotifyService) notifyMacOS(ctx context.Context, title, message string) error {
|
||||
script := fmt.Sprintf(`display notification "%s" with title "%s"`, escapeAppleScript(message), escapeAppleScript(title))
|
||||
if n.sound {
|
||||
script += ` sound name "Glass"`
|
||||
}
|
||||
cmd := exec.CommandContext(ctx, "osascript", "-e", script)
|
||||
return cmd.Run()
|
||||
}
|
||||
|
||||
// notifyLinux sends a notification on Linux using notify-send.
|
||||
func (n *NotifyService) notifyLinux(ctx context.Context, title, message string) error {
|
||||
args := []string{
|
||||
"--app-name=BugSETI",
|
||||
"--urgency=normal",
|
||||
title,
|
||||
message,
|
||||
}
|
||||
cmd := exec.CommandContext(ctx, "notify-send", args...)
|
||||
return cmd.Run()
|
||||
}
|
||||
|
||||
// notifyWindows sends a notification on Windows using PowerShell.
|
||||
func (n *NotifyService) notifyWindows(ctx context.Context, title, message string) error {
|
||||
title = escapePowerShellXML(title)
|
||||
message = escapePowerShellXML(message)
|
||||
|
||||
script := fmt.Sprintf(`
|
||||
[Windows.UI.Notifications.ToastNotificationManager, Windows.UI.Notifications, ContentType = WindowsRuntime] | Out-Null
|
||||
[Windows.Data.Xml.Dom.XmlDocument, Windows.Data.Xml.Dom.XmlDocument, ContentType = WindowsRuntime] | Out-Null
|
||||
|
||||
$template = @"
|
||||
<toast>
|
||||
<visual>
|
||||
<binding template="ToastText02">
|
||||
<text id="1">%s</text>
|
||||
<text id="2">%s</text>
|
||||
</binding>
|
||||
</visual>
|
||||
</toast>
|
||||
"@
|
||||
|
||||
$xml = New-Object Windows.Data.Xml.Dom.XmlDocument
|
||||
$xml.LoadXml($template)
|
||||
$toast = [Windows.UI.Notifications.ToastNotification]::new($xml)
|
||||
[Windows.UI.Notifications.ToastNotificationManager]::CreateToastNotifier("BugSETI").Show($toast)
|
||||
`, title, message)
|
||||
|
||||
cmd := exec.CommandContext(ctx, "powershell", "-Command", script)
|
||||
return cmd.Run()
|
||||
}
|
||||
|
||||
// NotifyWithAction sends a notification with an action button (platform-specific).
|
||||
func (n *NotifyService) NotifyWithAction(title, message, actionLabel string) error {
|
||||
if !n.enabled {
|
||||
return nil
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
switch runtime.GOOS {
|
||||
case "darwin":
|
||||
// macOS: Use terminal-notifier if available for actions
|
||||
if _, err := exec.LookPath("terminal-notifier"); err == nil {
|
||||
cmd := exec.CommandContext(ctx, "terminal-notifier",
|
||||
"-title", title,
|
||||
"-message", message,
|
||||
"-appIcon", "NSApplication",
|
||||
"-actions", actionLabel,
|
||||
"-group", "BugSETI")
|
||||
return cmd.Run()
|
||||
}
|
||||
return n.notifyMacOS(ctx, title, message)
|
||||
|
||||
case "linux":
|
||||
// Linux: Use notify-send with action
|
||||
args := []string{
|
||||
"--app-name=BugSETI",
|
||||
"--urgency=normal",
|
||||
"--action=open=" + actionLabel,
|
||||
title,
|
||||
message,
|
||||
}
|
||||
cmd := exec.CommandContext(ctx, "notify-send", args...)
|
||||
return cmd.Run()
|
||||
|
||||
default:
|
||||
return n.Notify(title, message)
|
||||
}
|
||||
}
|
||||
|
||||
// NotifyProgress sends a notification with a progress indicator.
|
||||
func (n *NotifyService) NotifyProgress(title, message string, progress int) error {
|
||||
if !n.enabled {
|
||||
return nil
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
switch runtime.GOOS {
|
||||
case "linux":
|
||||
// Linux supports progress hints
|
||||
args := []string{
|
||||
"--app-name=BugSETI",
|
||||
"--hint=int:value:" + fmt.Sprintf("%d", progress),
|
||||
title,
|
||||
message,
|
||||
}
|
||||
cmd := exec.CommandContext(ctx, "notify-send", args...)
|
||||
return cmd.Run()
|
||||
|
||||
default:
|
||||
// Other platforms: include progress in message
|
||||
messageWithProgress := fmt.Sprintf("%s (%d%%)", message, progress)
|
||||
return n.Notify(title, messageWithProgress)
|
||||
}
|
||||
}
|
||||
|
||||
// PlaySound plays a notification sound.
|
||||
func (n *NotifyService) PlaySound() error {
|
||||
if !n.sound {
|
||||
return nil
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
|
||||
defer cancel()
|
||||
|
||||
switch runtime.GOOS {
|
||||
case "darwin":
|
||||
cmd := exec.CommandContext(ctx, "afplay", "/System/Library/Sounds/Glass.aiff")
|
||||
return cmd.Run()
|
||||
|
||||
case "linux":
|
||||
// Try paplay (PulseAudio), then aplay (ALSA)
|
||||
if _, err := exec.LookPath("paplay"); err == nil {
|
||||
cmd := exec.CommandContext(ctx, "paplay", "/usr/share/sounds/freedesktop/stereo/complete.oga")
|
||||
return cmd.Run()
|
||||
}
|
||||
if _, err := exec.LookPath("aplay"); err == nil {
|
||||
cmd := exec.CommandContext(ctx, "aplay", "-q", "/usr/share/sounds/alsa/Front_Center.wav")
|
||||
return cmd.Run()
|
||||
}
|
||||
return nil
|
||||
|
||||
case "windows":
|
||||
script := `[console]::beep(800, 200)`
|
||||
cmd := exec.CommandContext(ctx, "powershell", "-Command", script)
|
||||
return cmd.Run()
|
||||
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,314 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"container/heap"
|
||||
"encoding/json"
|
||||
"log"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// IssueStatus represents the status of an issue in the queue.
|
||||
type IssueStatus string
|
||||
|
||||
const (
|
||||
StatusPending IssueStatus = "pending"
|
||||
StatusClaimed IssueStatus = "claimed"
|
||||
StatusInProgress IssueStatus = "in_progress"
|
||||
StatusCompleted IssueStatus = "completed"
|
||||
StatusSkipped IssueStatus = "skipped"
|
||||
)
|
||||
|
||||
// Issue represents a GitHub issue in the queue.
|
||||
type Issue struct {
|
||||
ID string `json:"id"`
|
||||
Number int `json:"number"`
|
||||
Repo string `json:"repo"`
|
||||
Title string `json:"title"`
|
||||
Body string `json:"body"`
|
||||
URL string `json:"url"`
|
||||
Labels []string `json:"labels"`
|
||||
Author string `json:"author"`
|
||||
CreatedAt time.Time `json:"createdAt"`
|
||||
Priority int `json:"priority"`
|
||||
Status IssueStatus `json:"status"`
|
||||
ClaimedAt time.Time `json:"claimedAt,omitempty"`
|
||||
Context *IssueContext `json:"context,omitempty"`
|
||||
Comments []Comment `json:"comments,omitempty"`
|
||||
index int // For heap interface
|
||||
}
|
||||
|
||||
// Comment represents a comment on an issue.
|
||||
type Comment struct {
|
||||
Author string `json:"author"`
|
||||
Body string `json:"body"`
|
||||
}
|
||||
|
||||
// IssueContext contains AI-prepared context for an issue.
|
||||
type IssueContext struct {
|
||||
Summary string `json:"summary"`
|
||||
RelevantFiles []string `json:"relevantFiles"`
|
||||
SuggestedFix string `json:"suggestedFix"`
|
||||
RelatedIssues []string `json:"relatedIssues"`
|
||||
Complexity string `json:"complexity"`
|
||||
EstimatedTime string `json:"estimatedTime"`
|
||||
PreparedAt time.Time `json:"preparedAt"`
|
||||
}
|
||||
|
||||
// QueueService manages the priority queue of issues.
|
||||
type QueueService struct {
|
||||
config *ConfigService
|
||||
issues issueHeap
|
||||
seen map[string]bool
|
||||
current *Issue
|
||||
mu sync.RWMutex
|
||||
}
|
||||
|
||||
// issueHeap implements heap.Interface for priority queue.
|
||||
type issueHeap []*Issue
|
||||
|
||||
func (h issueHeap) Len() int { return len(h) }
|
||||
func (h issueHeap) Less(i, j int) bool { return h[i].Priority > h[j].Priority } // Higher priority first
|
||||
func (h issueHeap) Swap(i, j int) {
|
||||
h[i], h[j] = h[j], h[i]
|
||||
h[i].index = i
|
||||
h[j].index = j
|
||||
}
|
||||
|
||||
func (h *issueHeap) Push(x any) {
|
||||
n := len(*h)
|
||||
item := x.(*Issue)
|
||||
item.index = n
|
||||
*h = append(*h, item)
|
||||
}
|
||||
|
||||
func (h *issueHeap) Pop() any {
|
||||
old := *h
|
||||
n := len(old)
|
||||
item := old[n-1]
|
||||
old[n-1] = nil
|
||||
item.index = -1
|
||||
*h = old[0 : n-1]
|
||||
return item
|
||||
}
|
||||
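// Added note: issueHeap is a max-heap on Priority because Less compares with
// ">", so with priorities {3, 9, 5} pushed via heap.Push, heap.Pop returns the
// Priority=9 issue first.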
|
||||
// NewQueueService creates a new QueueService.
|
||||
func NewQueueService(config *ConfigService) *QueueService {
|
||||
q := &QueueService{
|
||||
config: config,
|
||||
}
|
||||
|
||||
// Hold the lock for the entire initialization sequence so that all
|
||||
// shared state (issues, seen, current) is fully populated before
|
||||
// any concurrent caller can observe the service.
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
q.issues = make(issueHeap, 0)
|
||||
q.seen = make(map[string]bool)
|
||||
q.load() // Load persisted queue (overwrites issues/seen if file exists)
|
||||
return q
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (q *QueueService) ServiceName() string {
|
||||
return "QueueService"
|
||||
}
|
||||
|
||||
// Add adds issues to the queue, deduplicating by ID.
|
||||
func (q *QueueService) Add(issues []*Issue) int {
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
added := 0
|
||||
for _, issue := range issues {
|
||||
if q.seen[issue.ID] {
|
||||
continue
|
||||
}
|
||||
q.seen[issue.ID] = true
|
||||
issue.Status = StatusPending
|
||||
heap.Push(&q.issues, issue)
|
||||
added++
|
||||
}
|
||||
|
||||
if added > 0 {
|
||||
q.save()
|
||||
}
|
||||
return added
|
||||
}
|
||||
|
||||
// Size returns the number of issues in the queue.
|
||||
func (q *QueueService) Size() int {
|
||||
q.mu.RLock()
|
||||
defer q.mu.RUnlock()
|
||||
return len(q.issues)
|
||||
}
|
||||
|
||||
// CurrentIssue returns the issue currently being worked on.
|
||||
func (q *QueueService) CurrentIssue() *Issue {
|
||||
q.mu.RLock()
|
||||
defer q.mu.RUnlock()
|
||||
return q.current
|
||||
}
|
||||
|
||||
// Next claims and returns the next issue from the queue.
|
||||
func (q *QueueService) Next() *Issue {
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
if len(q.issues) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Pop the highest priority issue
|
||||
issue := heap.Pop(&q.issues).(*Issue)
|
||||
issue.Status = StatusClaimed
|
||||
issue.ClaimedAt = time.Now()
|
||||
q.current = issue
|
||||
q.save()
|
||||
return issue
|
||||
}
|
||||
|
||||
// Skip marks the current issue as skipped and moves to the next.
|
||||
func (q *QueueService) Skip() {
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
if q.current != nil {
|
||||
q.current.Status = StatusSkipped
|
||||
q.current = nil
|
||||
q.save()
|
||||
}
|
||||
}
|
||||
|
||||
// Complete marks the current issue as completed.
|
||||
func (q *QueueService) Complete() {
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
if q.current != nil {
|
||||
q.current.Status = StatusCompleted
|
||||
q.current = nil
|
||||
q.save()
|
||||
}
|
||||
}
|
||||
|
||||
// SetInProgress marks the current issue as in progress.
|
||||
func (q *QueueService) SetInProgress() {
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
if q.current != nil {
|
||||
q.current.Status = StatusInProgress
|
||||
q.save()
|
||||
}
|
||||
}
|
||||
|
||||
// SetContext sets the AI-prepared context for the current issue.
|
||||
func (q *QueueService) SetContext(ctx *IssueContext) {
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
if q.current != nil {
|
||||
q.current.Context = ctx
|
||||
q.save()
|
||||
}
|
||||
}
|
||||
|
||||
// GetPending returns all pending issues.
|
||||
func (q *QueueService) GetPending() []*Issue {
|
||||
q.mu.RLock()
|
||||
defer q.mu.RUnlock()
|
||||
|
||||
result := make([]*Issue, 0, len(q.issues))
|
||||
for _, issue := range q.issues {
|
||||
if issue.Status == StatusPending {
|
||||
result = append(result, issue)
|
||||
}
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// Clear removes all issues from the queue.
|
||||
func (q *QueueService) Clear() {
|
||||
q.mu.Lock()
|
||||
defer q.mu.Unlock()
|
||||
|
||||
q.issues = make(issueHeap, 0)
|
||||
q.seen = make(map[string]bool)
|
||||
q.current = nil
|
||||
heap.Init(&q.issues)
|
||||
q.save()
|
||||
}
|
||||
|
||||
// queueState represents the persisted queue state.
|
||||
type queueState struct {
|
||||
Issues []*Issue `json:"issues"`
|
||||
Current *Issue `json:"current"`
|
||||
Seen []string `json:"seen"`
|
||||
}
|
||||
|
||||
// save persists the queue to disk. Must be called with q.mu held.
|
||||
func (q *QueueService) save() {
|
||||
dataDir := q.config.GetDataDir()
|
||||
if dataDir == "" {
|
||||
return
|
||||
}
|
||||
|
||||
path := filepath.Join(dataDir, "queue.json")
|
||||
|
||||
seen := make([]string, 0, len(q.seen))
|
||||
for id := range q.seen {
|
||||
seen = append(seen, id)
|
||||
}
|
||||
|
||||
state := queueState{
|
||||
Issues: []*Issue(q.issues),
|
||||
Current: q.current,
|
||||
Seen: seen,
|
||||
}
|
||||
|
||||
data, err := json.MarshalIndent(state, "", " ")
|
||||
if err != nil {
|
||||
log.Printf("Failed to marshal queue: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
if err := os.WriteFile(path, data, 0644); err != nil {
|
||||
log.Printf("Failed to save queue: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// load restores the queue from disk. Must be called with q.mu held.
|
||||
func (q *QueueService) load() {
|
||||
dataDir := q.config.GetDataDir()
|
||||
if dataDir == "" {
|
||||
return
|
||||
}
|
||||
|
||||
path := filepath.Join(dataDir, "queue.json")
|
||||
data, err := os.ReadFile(path)
|
||||
if err != nil {
|
||||
if !os.IsNotExist(err) {
|
||||
log.Printf("Failed to read queue: %v", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
var state queueState
|
||||
if err := json.Unmarshal(data, &state); err != nil {
|
||||
log.Printf("Failed to unmarshal queue: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
q.issues = state.Issues
|
||||
heap.Init(&q.issues)
|
||||
q.current = state.Current
|
||||
q.seen = make(map[string]bool)
|
||||
for _, id := range state.Seen {
|
||||
q.seen[id] = true
|
||||
}
|
||||
}
|
||||
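// Added note: queue state round-trips through <dataDir>/queue.json; load
// rebuilds the heap ordering with heap.Init and the seen-set from the
// persisted ID list, so restarts keep deduplication intact.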
|
|
@@ -1,383 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// SeederService prepares context for issues using the seed-agent-developer skill.
|
||||
type SeederService struct {
|
||||
mu sync.Mutex
|
||||
config *ConfigService
|
||||
forgeURL string
|
||||
forgeToken string
|
||||
}
|
||||
|
||||
// NewSeederService creates a new SeederService.
|
||||
func NewSeederService(config *ConfigService, forgeURL, forgeToken string) *SeederService {
|
||||
return &SeederService{
|
||||
config: config,
|
||||
forgeURL: forgeURL,
|
||||
forgeToken: forgeToken,
|
||||
}
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (s *SeederService) ServiceName() string {
|
||||
return "SeederService"
|
||||
}
|
||||
|
||||
// SeedIssue prepares context for an issue by calling the seed-agent-developer skill.
|
||||
func (s *SeederService) SeedIssue(issue *Issue) (*IssueContext, error) {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
if issue == nil {
|
||||
return nil, fmt.Errorf("issue is nil")
|
||||
}
|
||||
|
||||
// Create a temporary workspace for the issue
|
||||
workDir, err := s.prepareWorkspace(issue)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to prepare workspace: %w", err)
|
||||
}
|
||||
|
||||
// Try to use the seed-agent-developer skill via plugin system
|
||||
ctx, err := s.runSeedSkill(issue, workDir)
|
||||
if err != nil {
|
||||
log.Printf("Seed skill failed, using fallback: %v", err)
|
||||
// Fallback to basic context preparation
|
||||
guard := getEthicsGuardWithRoot(context.Background(), s.config.GetMarketplaceMCPRoot())
|
||||
ctx = s.prepareBasicContext(issue, guard)
|
||||
}
|
||||
|
||||
ctx.PreparedAt = time.Now()
|
||||
return ctx, nil
|
||||
}
|
||||
|
||||
// prepareWorkspace creates a temporary workspace and clones the repo.
|
||||
func (s *SeederService) prepareWorkspace(issue *Issue) (string, error) {
|
||||
// Create workspace directory
|
||||
baseDir := s.config.GetWorkspaceDir()
|
||||
if baseDir == "" {
|
||||
baseDir = filepath.Join(os.TempDir(), "bugseti")
|
||||
}
|
||||
|
||||
// Create issue-specific directory
|
||||
workDir := filepath.Join(baseDir, sanitizeRepoName(issue.Repo), fmt.Sprintf("issue-%d", issue.Number))
|
||||
if err := os.MkdirAll(workDir, 0755); err != nil {
|
||||
return "", fmt.Errorf("failed to create workspace: %w", err)
|
||||
}
|
||||
|
||||
// Check if repo already cloned
|
||||
if _, err := os.Stat(filepath.Join(workDir, ".git")); os.IsNotExist(err) {
|
||||
// Clone the repository
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
cloneURL := fmt.Sprintf("%s/%s.git", strings.TrimRight(s.forgeURL, "/"), issue.Repo)
|
||||
cmd := exec.CommandContext(ctx, "git", "clone", "--depth=1", cloneURL, workDir)
|
||||
cmd.Env = append(os.Environ(),
|
||||
fmt.Sprintf("GIT_ASKPASS=echo"),
|
||||
fmt.Sprintf("GIT_TERMINAL_PROMPT=0"),
|
||||
)
|
||||
if s.forgeToken != "" {
|
||||
// Use token auth via URL for HTTPS clones
|
||||
cloneURL = fmt.Sprintf("%s/%s.git", strings.TrimRight(s.forgeURL, "/"), issue.Repo)
|
||||
cloneURL = strings.Replace(cloneURL, "://", fmt.Sprintf("://bugseti:%s@", s.forgeToken), 1)
|
||||
cmd = exec.CommandContext(ctx, "git", "clone", "--depth=1", cloneURL, workDir)
|
||||
}
|
||||
var stderr bytes.Buffer
|
||||
cmd.Stderr = &stderr
|
||||
if err := cmd.Run(); err != nil {
|
||||
return "", fmt.Errorf("failed to clone repo: %s: %w", stderr.String(), err)
|
||||
}
|
||||
}
|
||||
|
||||
return workDir, nil
|
||||
}
|
||||
|
||||
// runSeedSkill executes the seed-agent-developer skill to prepare context.
|
||||
func (s *SeederService) runSeedSkill(issue *Issue, workDir string) (*IssueContext, error) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
mcpCtx, mcpCancel := context.WithTimeout(ctx, 20*time.Second)
|
||||
defer mcpCancel()
|
||||
|
||||
marketplace, err := newMarketplaceClient(mcpCtx, s.config.GetMarketplaceMCPRoot())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer marketplace.Close()
|
||||
|
||||
guard := guardFromMarketplace(mcpCtx, marketplace)
|
||||
|
||||
scriptPath, err := findSeedSkillScript(mcpCtx, marketplace)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Run the analyze-issue script
|
||||
cmd := exec.CommandContext(ctx, "bash", scriptPath)
|
||||
cmd.Dir = workDir
|
||||
cmd.Env = append(os.Environ(),
|
||||
fmt.Sprintf("ISSUE_NUMBER=%d", issue.Number),
|
||||
fmt.Sprintf("ISSUE_REPO=%s", guard.SanitizeEnv(issue.Repo)),
|
||||
fmt.Sprintf("ISSUE_TITLE=%s", guard.SanitizeEnv(issue.Title)),
|
||||
fmt.Sprintf("ISSUE_URL=%s", guard.SanitizeEnv(issue.URL)),
|
||||
)
|
||||
|
||||
var stdout, stderr bytes.Buffer
|
||||
cmd.Stdout = &stdout
|
||||
cmd.Stderr = &stderr
|
||||
|
||||
if err := cmd.Run(); err != nil {
|
||||
return nil, fmt.Errorf("seed skill failed: %s: %w", stderr.String(), err)
|
||||
}
|
||||
|
||||
// Parse the output as JSON
|
||||
var result struct {
|
||||
Summary string `json:"summary"`
|
||||
RelevantFiles []string `json:"relevant_files"`
|
||||
SuggestedFix string `json:"suggested_fix"`
|
||||
RelatedIssues []string `json:"related_issues"`
|
||||
Complexity string `json:"complexity"`
|
||||
EstimatedTime string `json:"estimated_time"`
|
||||
}
|
||||
|
||||
if err := json.Unmarshal(stdout.Bytes(), &result); err != nil {
|
||||
// If not JSON, treat as plain text summary
|
||||
return sanitizeIssueContext(&IssueContext{
|
||||
Summary: stdout.String(),
|
||||
Complexity: "unknown",
|
||||
}, guard), nil
|
||||
}
|
||||
|
||||
return sanitizeIssueContext(&IssueContext{
|
||||
Summary: result.Summary,
|
||||
RelevantFiles: result.RelevantFiles,
|
||||
SuggestedFix: result.SuggestedFix,
|
||||
RelatedIssues: result.RelatedIssues,
|
||||
Complexity: result.Complexity,
|
||||
EstimatedTime: result.EstimatedTime,
|
||||
}, guard), nil
|
||||
}
|
||||
|
||||
// prepareBasicContext creates a basic context without the seed skill.
|
||||
func (s *SeederService) prepareBasicContext(issue *Issue, guard *EthicsGuard) *IssueContext {
|
||||
// Extract potential file references from issue body
|
||||
files := extractFileReferences(issue.Body)
|
||||
|
||||
// Estimate complexity based on labels and body length
|
||||
complexity := estimateComplexity(issue)
|
||||
|
||||
return sanitizeIssueContext(&IssueContext{
|
||||
Summary: fmt.Sprintf("Issue #%d in %s: %s", issue.Number, issue.Repo, issue.Title),
|
||||
RelevantFiles: files,
|
||||
Complexity: complexity,
|
||||
EstimatedTime: estimateTime(complexity),
|
||||
}, guard)
|
||||
}
|
||||
|
||||
// sanitizeRepoName converts owner/repo to a safe directory name.
|
||||
func sanitizeRepoName(repo string) string {
|
||||
return strings.ReplaceAll(repo, "/", "-")
|
||||
}
|
||||
|
||||
// extractFileReferences finds file paths mentioned in text.
|
||||
func extractFileReferences(text string) []string {
|
||||
var files []string
|
||||
seen := make(map[string]bool)
|
||||
|
||||
// Common file patterns
|
||||
patterns := []string{
|
||||
`.go`, `.js`, `.ts`, `.py`, `.rs`, `.java`, `.cpp`, `.c`, `.h`,
|
||||
`.json`, `.yaml`, `.yml`, `.toml`, `.xml`, `.md`,
|
||||
}
|
||||
|
||||
words := strings.Fields(text)
|
||||
for _, word := range words {
|
||||
// Clean up the word
|
||||
word = strings.Trim(word, "`,\"'()[]{}:")
|
||||
|
||||
// Check if it looks like a file path
|
||||
for _, ext := range patterns {
|
||||
if strings.HasSuffix(word, ext) && !seen[word] {
|
||||
files = append(files, word)
|
||||
seen[word] = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return files
|
||||
}
|
||||
|
||||
// estimateComplexity guesses issue complexity from content.
|
||||
func estimateComplexity(issue *Issue) string {
|
||||
bodyLen := len(issue.Body)
|
||||
labelScore := 0
|
||||
|
||||
for _, label := range issue.Labels {
|
||||
lower := strings.ToLower(label)
|
||||
switch {
|
||||
case strings.Contains(lower, "good first issue"), strings.Contains(lower, "beginner"):
|
||||
labelScore -= 2
|
||||
case strings.Contains(lower, "easy"):
|
||||
labelScore -= 1
|
||||
case strings.Contains(lower, "complex"), strings.Contains(lower, "hard"):
|
||||
labelScore += 2
|
||||
case strings.Contains(lower, "refactor"):
|
||||
labelScore += 1
|
||||
}
|
||||
}
|
||||
|
||||
// Combine body length and label score
|
||||
score := labelScore
|
||||
if bodyLen > 2000 {
|
||||
score += 2
|
||||
} else if bodyLen > 500 {
|
||||
score += 1
|
||||
}
|
||||
|
||||
switch {
|
||||
case score <= -1:
|
||||
return "easy"
|
||||
case score <= 1:
|
||||
return "medium"
|
||||
default:
|
||||
return "hard"
|
||||
}
|
||||
}
|
||||
|
||||
// estimateTime suggests time based on complexity.
|
||||
func estimateTime(complexity string) string {
|
||||
switch complexity {
|
||||
case "easy":
|
||||
return "15-30 minutes"
|
||||
case "medium":
|
||||
return "1-2 hours"
|
||||
case "hard":
|
||||
return "2-4 hours"
|
||||
default:
|
||||
return "unknown"
|
||||
}
|
||||
}
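The fallback path is driven entirely by the small pure helpers above. A minimal sketch of how they behave, with invented issue text; it assumes it would sit in package bugseti next to the helpers with an `fmt` import:

```go
// Illustrative only: exercises the fallback helpers with made-up input.
func sketchFallbackHelpers() {
	body := "Panic in pkg/forge/client.go when the token is empty; see also config.yaml"
	fmt.Println(extractFileReferences(body)) // [pkg/forge/client.go config.yaml]

	issue := &Issue{Body: body, Labels: []string{"good first issue"}}
	complexity := estimateComplexity(issue)           // "good first issue" scores -2 -> "easy"
	fmt.Println(complexity, estimateTime(complexity)) // easy 15-30 minutes

	fmt.Println(sanitizeRepoName("core/cli")) // core-cli
}
```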
|
||||
|
||||
const seedSkillName = "seed-agent-developer"
|
||||
|
||||
func findSeedSkillScript(ctx context.Context, marketplace marketplaceClient) (string, error) {
|
||||
if marketplace == nil {
|
||||
return "", fmt.Errorf("marketplace client is nil")
|
||||
}
|
||||
|
||||
plugins, err := marketplace.ListMarketplace(ctx)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
for _, plugin := range plugins {
|
||||
info, err := marketplace.PluginInfo(ctx, plugin.Name)
|
||||
if err != nil || info == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
if !containsSkill(info.Skills, seedSkillName) {
|
||||
continue
|
||||
}
|
||||
|
||||
scriptPath, err := safeJoinUnder(info.Path, "skills", seedSkillName, "scripts", "analyze-issue.sh")
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
if stat, err := os.Stat(scriptPath); err == nil && !stat.IsDir() {
|
||||
return scriptPath, nil
|
||||
}
|
||||
}
|
||||
|
||||
return "", fmt.Errorf("seed-agent-developer skill not found in marketplace")
|
||||
}
|
||||
|
||||
func containsSkill(skills []string, name string) bool {
|
||||
for _, skill := range skills {
|
||||
if skill == name {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func safeJoinUnder(base string, elems ...string) (string, error) {
|
||||
if base == "" {
|
||||
return "", fmt.Errorf("base path is empty")
|
||||
}
|
||||
baseAbs, err := filepath.Abs(base)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to resolve base path: %w", err)
|
||||
}
|
||||
|
||||
joined := filepath.Join(append([]string{baseAbs}, elems...)...)
|
||||
rel, err := filepath.Rel(baseAbs, joined)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to resolve relative path: %w", err)
|
||||
}
|
||||
if strings.HasPrefix(rel, "..") {
|
||||
return "", fmt.Errorf("resolved path escapes base: %s", rel)
|
||||
}
|
||||
|
||||
return joined, nil
|
||||
}
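safeJoinUnder is the path-traversal guard for skill lookups: any path that resolves outside the plugin root is rejected. A hedged sketch of that contract in the same test style as seeder_test.go (paths are invented; assumes the usual testing and strings imports):

```go
func TestSafeJoinUnder_Sketch(t *testing.T) {
	// A normal skill path stays under the base and joins cleanly.
	p, err := safeJoinUnder("/opt/plugins/foo", "skills", seedSkillName, "scripts", "analyze-issue.sh")
	if err != nil || !strings.HasSuffix(p, "analyze-issue.sh") {
		t.Fatalf("expected joined path, got %q, %v", p, err)
	}

	// A relative escape ("../..") resolves outside the base and is refused.
	if _, err := safeJoinUnder("/opt/plugins/foo", "..", "..", "etc", "passwd"); err == nil {
		t.Fatal("expected traversal to be rejected")
	}
}
```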
|
||||
|
||||
func sanitizeIssueContext(ctx *IssueContext, guard *EthicsGuard) *IssueContext {
|
||||
if ctx == nil {
|
||||
return nil
|
||||
}
|
||||
if guard == nil {
|
||||
guard = &EthicsGuard{}
|
||||
}
|
||||
|
||||
ctx.Summary = guard.SanitizeSummary(ctx.Summary)
|
||||
ctx.SuggestedFix = guard.SanitizeSummary(ctx.SuggestedFix)
|
||||
ctx.Complexity = guard.SanitizeTitle(ctx.Complexity)
|
||||
ctx.EstimatedTime = guard.SanitizeTitle(ctx.EstimatedTime)
|
||||
ctx.RelatedIssues = guard.SanitizeList(ctx.RelatedIssues, maxTitleRunes)
|
||||
ctx.RelevantFiles = guard.SanitizeFiles(ctx.RelevantFiles)
|
||||
return ctx
|
||||
}
|
||||
|
||||
// GetWorkspaceDir returns the workspace directory for an issue.
|
||||
func (s *SeederService) GetWorkspaceDir(issue *Issue) string {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
return s.getWorkspaceDir(issue)
|
||||
}
|
||||
|
||||
// getWorkspaceDir does not take the lock; the caller must hold s.mu.
|
||||
func (s *SeederService) getWorkspaceDir(issue *Issue) string {
|
||||
baseDir := s.config.GetWorkspaceDir()
|
||||
if baseDir == "" {
|
||||
baseDir = filepath.Join(os.TempDir(), "bugseti")
|
||||
}
|
||||
return filepath.Join(baseDir, sanitizeRepoName(issue.Repo), fmt.Sprintf("issue-%d", issue.Number))
|
||||
}
|
||||
|
||||
// CleanupWorkspace removes the workspace for an issue.
|
||||
func (s *SeederService) CleanupWorkspace(issue *Issue) error {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
workDir := s.getWorkspaceDir(issue)
|
||||
return os.RemoveAll(workDir)
|
||||
}
|
||||
|
|
@@ -1,97 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
)
|
||||
|
||||
type fakeMarketplaceClient struct {
|
||||
plugins []MarketplacePlugin
|
||||
infos map[string]*PluginInfo
|
||||
listErr error
|
||||
infoErr map[string]error
|
||||
}
|
||||
|
||||
func (f *fakeMarketplaceClient) ListMarketplace(ctx context.Context) ([]MarketplacePlugin, error) {
|
||||
if f.listErr != nil {
|
||||
return nil, f.listErr
|
||||
}
|
||||
return f.plugins, nil
|
||||
}
|
||||
|
||||
func (f *fakeMarketplaceClient) PluginInfo(ctx context.Context, name string) (*PluginInfo, error) {
|
||||
if err, ok := f.infoErr[name]; ok {
|
||||
return nil, err
|
||||
}
|
||||
info, ok := f.infos[name]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("plugin not found")
|
||||
}
|
||||
return info, nil
|
||||
}
|
||||
|
||||
func (f *fakeMarketplaceClient) EthicsCheck(ctx context.Context) (*EthicsContext, error) {
|
||||
return nil, fmt.Errorf("not implemented")
|
||||
}
|
||||
|
||||
func (f *fakeMarketplaceClient) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestFindSeedSkillScript_Good(t *testing.T) {
|
||||
root := t.TempDir()
|
||||
scriptPath := filepath.Join(root, "skills", seedSkillName, "scripts", "analyze-issue.sh")
|
||||
if err := os.MkdirAll(filepath.Dir(scriptPath), 0755); err != nil {
|
||||
t.Fatalf("failed to create script directory: %v", err)
|
||||
}
|
||||
if err := os.WriteFile(scriptPath, []byte("#!/bin/bash\n"), 0755); err != nil {
|
||||
t.Fatalf("failed to write script: %v", err)
|
||||
}
|
||||
|
||||
plugin := MarketplacePlugin{Name: "seed-plugin"}
|
||||
client := &fakeMarketplaceClient{
|
||||
plugins: []MarketplacePlugin{plugin},
|
||||
infos: map[string]*PluginInfo{
|
||||
plugin.Name: {
|
||||
Plugin: plugin,
|
||||
Path: root,
|
||||
Skills: []string{seedSkillName},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
found, err := findSeedSkillScript(context.Background(), client)
|
||||
if err != nil {
|
||||
t.Fatalf("expected script path, got error: %v", err)
|
||||
}
|
||||
if found != scriptPath {
|
||||
t.Fatalf("expected %q, got %q", scriptPath, found)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFindSeedSkillScript_Bad(t *testing.T) {
|
||||
plugin := MarketplacePlugin{Name: "empty-plugin"}
|
||||
client := &fakeMarketplaceClient{
|
||||
plugins: []MarketplacePlugin{plugin},
|
||||
infos: map[string]*PluginInfo{
|
||||
plugin.Name: {
|
||||
Plugin: plugin,
|
||||
Path: t.TempDir(),
|
||||
Skills: []string{"not-the-skill"},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
if _, err := findSeedSkillScript(context.Background(), client); err == nil {
|
||||
t.Fatal("expected error when skill is missing")
|
||||
}
|
||||
}
|
||||
|
||||
func TestSafeJoinUnder_Ugly(t *testing.T) {
|
||||
if _, err := safeJoinUnder("", "skills"); err == nil {
|
||||
t.Fatal("expected error for empty base path")
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,359 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"log"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// StatsService tracks user contribution statistics.
|
||||
type StatsService struct {
|
||||
config *ConfigService
|
||||
stats *Stats
|
||||
mu sync.RWMutex
|
||||
}
|
||||
|
||||
// Stats contains all tracked statistics.
|
||||
type Stats struct {
|
||||
// Issue stats
|
||||
IssuesAttempted int `json:"issuesAttempted"`
|
||||
IssuesCompleted int `json:"issuesCompleted"`
|
||||
IssuesSkipped int `json:"issuesSkipped"`
|
||||
|
||||
// PR stats
|
||||
PRsSubmitted int `json:"prsSubmitted"`
|
||||
PRsMerged int `json:"prsMerged"`
|
||||
PRsRejected int `json:"prsRejected"`
|
||||
|
||||
// Repository stats
|
||||
ReposContributed map[string]*RepoStats `json:"reposContributed"`
|
||||
|
||||
// Streaks
|
||||
CurrentStreak int `json:"currentStreak"`
|
||||
LongestStreak int `json:"longestStreak"`
|
||||
LastActivity time.Time `json:"lastActivity"`
|
||||
|
||||
// Time tracking
|
||||
TotalTimeSpent time.Duration `json:"totalTimeSpent"`
|
||||
AverageTimePerPR time.Duration `json:"averageTimePerPR"`
|
||||
|
||||
// Activity history (last 30 days)
|
||||
DailyActivity map[string]*DayStats `json:"dailyActivity"`
|
||||
}
|
||||
|
||||
// RepoStats contains statistics for a single repository.
|
||||
type RepoStats struct {
|
||||
Name string `json:"name"`
|
||||
IssuesFixed int `json:"issuesFixed"`
|
||||
PRsSubmitted int `json:"prsSubmitted"`
|
||||
PRsMerged int `json:"prsMerged"`
|
||||
FirstContrib time.Time `json:"firstContrib"`
|
||||
LastContrib time.Time `json:"lastContrib"`
|
||||
}
|
||||
|
||||
// DayStats contains statistics for a single day.
|
||||
type DayStats struct {
|
||||
Date string `json:"date"`
|
||||
IssuesWorked int `json:"issuesWorked"`
|
||||
PRsSubmitted int `json:"prsSubmitted"`
|
||||
TimeSpent int `json:"timeSpentMinutes"`
|
||||
}
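These three structs are exactly what save() below serialises to stats.json, with the on-disk field names coming from the struct tags. A small illustrative fragment (assumes encoding/json and fmt are imported; the values are made up):

```go
s := Stats{
	IssuesCompleted: 3,
	PRsSubmitted:    2,
	ReposContributed: map[string]*RepoStats{
		"core/cli": {Name: "core/cli", PRsSubmitted: 2, PRsMerged: 1},
	},
	DailyActivity: map[string]*DayStats{
		"2026-02-05": {Date: "2026-02-05", PRsSubmitted: 1, TimeSpent: 40},
	},
}
out, _ := json.MarshalIndent(s, "", "  ")
fmt.Println(string(out)) // keys come out as issuesCompleted, prsSubmitted, reposContributed, ...
```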
|
||||
|
||||
// NewStatsService creates a new StatsService.
|
||||
func NewStatsService(config *ConfigService) *StatsService {
|
||||
s := &StatsService{
|
||||
config: config,
|
||||
stats: &Stats{
|
||||
ReposContributed: make(map[string]*RepoStats),
|
||||
DailyActivity: make(map[string]*DayStats),
|
||||
},
|
||||
}
|
||||
s.load()
|
||||
return s
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (s *StatsService) ServiceName() string {
|
||||
return "StatsService"
|
||||
}
|
||||
|
||||
// GetStats returns a copy of the current statistics.
|
||||
func (s *StatsService) GetStats() Stats {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
return *s.stats
|
||||
}
|
||||
|
||||
// RecordIssueAttempted records that an issue was started.
|
||||
func (s *StatsService) RecordIssueAttempted(repo string) {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats.IssuesAttempted++
|
||||
s.ensureRepo(repo)
|
||||
s.updateStreak()
|
||||
s.updateDailyActivity("issue")
|
||||
s.save()
|
||||
}
|
||||
|
||||
// RecordIssueCompleted records that an issue was completed.
|
||||
func (s *StatsService) RecordIssueCompleted(repo string) {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats.IssuesCompleted++
|
||||
if rs, ok := s.stats.ReposContributed[repo]; ok {
|
||||
rs.IssuesFixed++
|
||||
rs.LastContrib = time.Now()
|
||||
}
|
||||
s.save()
|
||||
}
|
||||
|
||||
// RecordIssueSkipped records that an issue was skipped.
|
||||
func (s *StatsService) RecordIssueSkipped() {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats.IssuesSkipped++
|
||||
s.save()
|
||||
}
|
||||
|
||||
// RecordPRSubmitted records that a PR was submitted.
|
||||
func (s *StatsService) RecordPRSubmitted(repo string) {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats.PRsSubmitted++
|
||||
if rs, ok := s.stats.ReposContributed[repo]; ok {
|
||||
rs.PRsSubmitted++
|
||||
rs.LastContrib = time.Now()
|
||||
}
|
||||
s.updateDailyActivity("pr")
|
||||
s.save()
|
||||
}
|
||||
|
||||
// RecordPRMerged records that a PR was merged.
|
||||
func (s *StatsService) RecordPRMerged(repo string) {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats.PRsMerged++
|
||||
if rs, ok := s.stats.ReposContributed[repo]; ok {
|
||||
rs.PRsMerged++
|
||||
}
|
||||
s.save()
|
||||
}
|
||||
|
||||
// RecordPRRejected records that a PR was rejected.
|
||||
func (s *StatsService) RecordPRRejected() {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats.PRsRejected++
|
||||
s.save()
|
||||
}
|
||||
|
||||
// RecordTimeSpent adds time spent on an issue.
|
||||
func (s *StatsService) RecordTimeSpent(duration time.Duration) {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats.TotalTimeSpent += duration
|
||||
|
||||
// Recalculate average
|
||||
if s.stats.PRsSubmitted > 0 {
|
||||
s.stats.AverageTimePerPR = s.stats.TotalTimeSpent / time.Duration(s.stats.PRsSubmitted)
|
||||
}
|
||||
|
||||
// Update daily activity
|
||||
today := time.Now().Format("2006-01-02")
|
||||
if day, ok := s.stats.DailyActivity[today]; ok {
|
||||
day.TimeSpent += int(duration.Minutes())
|
||||
}
|
||||
|
||||
s.save()
|
||||
}
|
||||
|
||||
// GetRepoStats returns statistics for a specific repository.
|
||||
func (s *StatsService) GetRepoStats(repo string) *RepoStats {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
return s.stats.ReposContributed[repo]
|
||||
}
|
||||
|
||||
// GetTopRepos returns the top N repositories by contributions.
|
||||
func (s *StatsService) GetTopRepos(n int) []*RepoStats {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
|
||||
repos := make([]*RepoStats, 0, len(s.stats.ReposContributed))
|
||||
for _, rs := range s.stats.ReposContributed {
|
||||
repos = append(repos, rs)
|
||||
}
|
||||
|
||||
// Sort by PRs merged (descending)
|
||||
for i := 0; i < len(repos)-1; i++ {
|
||||
for j := i + 1; j < len(repos); j++ {
|
||||
if repos[j].PRsMerged > repos[i].PRsMerged {
|
||||
repos[i], repos[j] = repos[j], repos[i]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if n > len(repos) {
|
||||
n = len(repos)
|
||||
}
|
||||
return repos[:n]
|
||||
}
|
||||
|
||||
// GetActivityHistory returns the activity for the last N days.
|
||||
func (s *StatsService) GetActivityHistory(days int) []*DayStats {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
|
||||
result := make([]*DayStats, 0, days)
|
||||
now := time.Now()
|
||||
|
||||
for i := 0; i < days; i++ {
|
||||
date := now.AddDate(0, 0, -i).Format("2006-01-02")
|
||||
if day, ok := s.stats.DailyActivity[date]; ok {
|
||||
result = append(result, day)
|
||||
} else {
|
||||
result = append(result, &DayStats{Date: date})
|
||||
}
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// ensureRepo creates a repo stats entry if it doesn't exist.
|
||||
func (s *StatsService) ensureRepo(repo string) {
|
||||
if _, ok := s.stats.ReposContributed[repo]; !ok {
|
||||
s.stats.ReposContributed[repo] = &RepoStats{
|
||||
Name: repo,
|
||||
FirstContrib: time.Now(),
|
||||
LastContrib: time.Now(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// updateStreak updates the contribution streak.
|
||||
func (s *StatsService) updateStreak() {
|
||||
now := time.Now()
|
||||
lastActivity := s.stats.LastActivity
|
||||
|
||||
if lastActivity.IsZero() {
|
||||
s.stats.CurrentStreak = 1
|
||||
} else {
|
||||
daysSince := int(now.Sub(lastActivity).Hours() / 24)
|
||||
if daysSince <= 1 {
|
||||
// Same day or next day
|
||||
if daysSince == 1 || now.Day() != lastActivity.Day() {
|
||||
s.stats.CurrentStreak++
|
||||
}
|
||||
} else {
|
||||
// Streak broken
|
||||
s.stats.CurrentStreak = 1
|
||||
}
|
||||
}
|
||||
|
||||
if s.stats.CurrentStreak > s.stats.LongestStreak {
|
||||
s.stats.LongestStreak = s.stats.CurrentStreak
|
||||
}
|
||||
|
||||
s.stats.LastActivity = now
|
||||
}
|
||||
|
||||
// updateDailyActivity updates today's activity.
|
||||
func (s *StatsService) updateDailyActivity(activityType string) {
|
||||
today := time.Now().Format("2006-01-02")
|
||||
|
||||
if _, ok := s.stats.DailyActivity[today]; !ok {
|
||||
s.stats.DailyActivity[today] = &DayStats{Date: today}
|
||||
}
|
||||
|
||||
day := s.stats.DailyActivity[today]
|
||||
switch activityType {
|
||||
case "issue":
|
||||
day.IssuesWorked++
|
||||
case "pr":
|
||||
day.PRsSubmitted++
|
||||
}
|
||||
|
||||
// Clean up old entries (keep last 90 days)
|
||||
cutoff := time.Now().AddDate(0, 0, -90).Format("2006-01-02")
|
||||
for date := range s.stats.DailyActivity {
|
||||
if date < cutoff {
|
||||
delete(s.stats.DailyActivity, date)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// save persists stats to disk.
|
||||
func (s *StatsService) save() {
|
||||
dataDir := s.config.GetDataDir()
|
||||
if dataDir == "" {
|
||||
return
|
||||
}
|
||||
|
||||
path := filepath.Join(dataDir, "stats.json")
|
||||
data, err := json.MarshalIndent(s.stats, "", " ")
|
||||
if err != nil {
|
||||
log.Printf("Failed to marshal stats: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
if err := os.WriteFile(path, data, 0644); err != nil {
|
||||
log.Printf("Failed to save stats: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// load restores stats from disk.
|
||||
func (s *StatsService) load() {
|
||||
dataDir := s.config.GetDataDir()
|
||||
if dataDir == "" {
|
||||
return
|
||||
}
|
||||
|
||||
path := filepath.Join(dataDir, "stats.json")
|
||||
data, err := os.ReadFile(path)
|
||||
if err != nil {
|
||||
if !os.IsNotExist(err) {
|
||||
log.Printf("Failed to read stats: %v", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
var stats Stats
|
||||
if err := json.Unmarshal(data, &stats); err != nil {
|
||||
log.Printf("Failed to unmarshal stats: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
// Ensure maps are initialized
|
||||
if stats.ReposContributed == nil {
|
||||
stats.ReposContributed = make(map[string]*RepoStats)
|
||||
}
|
||||
if stats.DailyActivity == nil {
|
||||
stats.DailyActivity = make(map[string]*DayStats)
|
||||
}
|
||||
|
||||
s.stats = &stats
|
||||
}
|
||||
|
||||
// Reset clears all statistics.
|
||||
func (s *StatsService) Reset() error {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.stats = &Stats{
|
||||
ReposContributed: make(map[string]*RepoStats),
|
||||
DailyActivity: make(map[string]*DayStats),
|
||||
}
|
||||
s.save()
|
||||
return nil
|
||||
}
|
||||
|
|
@@ -1,366 +0,0 @@
|
|||
// Package bugseti provides services for the BugSETI distributed bug fixing application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"log"
|
||||
"os/exec"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
forgejo "codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/forge"
|
||||
)
|
||||
|
||||
// SubmitService handles the PR submission flow.
|
||||
type SubmitService struct {
|
||||
config *ConfigService
|
||||
notify *NotifyService
|
||||
stats *StatsService
|
||||
forge *forge.Client
|
||||
}
|
||||
|
||||
// NewSubmitService creates a new SubmitService.
|
||||
func NewSubmitService(config *ConfigService, notify *NotifyService, stats *StatsService, forgeClient *forge.Client) *SubmitService {
|
||||
return &SubmitService{
|
||||
config: config,
|
||||
notify: notify,
|
||||
stats: stats,
|
||||
forge: forgeClient,
|
||||
}
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (s *SubmitService) ServiceName() string {
|
||||
return "SubmitService"
|
||||
}
|
||||
|
||||
// PRSubmission contains the data for a pull request submission.
|
||||
type PRSubmission struct {
|
||||
Issue *Issue `json:"issue"`
|
||||
Title string `json:"title"`
|
||||
Body string `json:"body"`
|
||||
Branch string `json:"branch"`
|
||||
CommitMsg string `json:"commitMsg"`
|
||||
Files []string `json:"files"`
|
||||
WorkDir string `json:"workDir"`
|
||||
}
|
||||
|
||||
// PRResult contains the result of a PR submission.
|
||||
type PRResult struct {
|
||||
Success bool `json:"success"`
|
||||
PRURL string `json:"prUrl,omitempty"`
|
||||
PRNumber int `json:"prNumber,omitempty"`
|
||||
Error string `json:"error,omitempty"`
|
||||
ForkOwner string `json:"forkOwner,omitempty"`
|
||||
}
|
||||
|
||||
// Submit creates a pull request for the given issue.
|
||||
// Flow: Fork -> Branch -> Commit -> Push -> PR
|
||||
func (s *SubmitService) Submit(submission *PRSubmission) (*PRResult, error) {
|
||||
if submission == nil || submission.Issue == nil {
|
||||
return nil, fmt.Errorf("invalid submission")
|
||||
}
|
||||
|
||||
issue := submission.Issue
|
||||
workDir := submission.WorkDir
|
||||
if workDir == "" {
|
||||
return nil, fmt.Errorf("work directory not specified")
|
||||
}
|
||||
|
||||
guard := getEthicsGuardWithRoot(context.Background(), s.config.GetMarketplaceMCPRoot())
|
||||
issueTitle := guard.SanitizeTitle(issue.Title)
|
||||
|
||||
owner, repoName, err := splitRepo(issue.Repo)
|
||||
if err != nil {
|
||||
return &PRResult{Success: false, Error: err.Error()}, err
|
||||
}
|
||||
|
||||
// Step 1: Ensure we have a fork
|
||||
forkOwner, err := s.ensureFork(owner, repoName)
|
||||
if err != nil {
|
||||
return &PRResult{Success: false, Error: fmt.Sprintf("fork failed: %v", err)}, err
|
||||
}
|
||||
|
||||
// Step 2: Create branch
|
||||
branch := submission.Branch
|
||||
if branch == "" {
|
||||
branch = fmt.Sprintf("bugseti/issue-%d", issue.Number)
|
||||
}
|
||||
if err := s.createBranch(workDir, branch); err != nil {
|
||||
return &PRResult{Success: false, Error: fmt.Sprintf("branch creation failed: %v", err)}, err
|
||||
}
|
||||
|
||||
// Step 3: Stage and commit changes
|
||||
commitMsg := submission.CommitMsg
|
||||
if commitMsg == "" {
|
||||
commitMsg = fmt.Sprintf("fix: resolve issue #%d\n\n%s\n\nFixes #%d", issue.Number, issueTitle, issue.Number)
|
||||
} else {
|
||||
commitMsg = guard.SanitizeBody(commitMsg)
|
||||
}
|
||||
if err := s.commitChanges(workDir, submission.Files, commitMsg); err != nil {
|
||||
return &PRResult{Success: false, Error: fmt.Sprintf("commit failed: %v", err)}, err
|
||||
}
|
||||
|
||||
// Step 4: Push to fork
|
||||
if err := s.pushToFork(workDir, forkOwner, repoName, branch); err != nil {
|
||||
return &PRResult{Success: false, Error: fmt.Sprintf("push failed: %v", err)}, err
|
||||
}
|
||||
|
||||
// Step 5: Create PR
|
||||
prTitle := submission.Title
|
||||
if prTitle == "" {
|
||||
prTitle = fmt.Sprintf("Fix #%d: %s", issue.Number, issueTitle)
|
||||
} else {
|
||||
prTitle = guard.SanitizeTitle(prTitle)
|
||||
}
|
||||
prBody := submission.Body
|
||||
if prBody == "" {
|
||||
prBody = s.generatePRBody(issue)
|
||||
}
|
||||
prBody = guard.SanitizeBody(prBody)
|
||||
|
||||
prURL, prNumber, err := s.createPR(owner, repoName, forkOwner, branch, prTitle, prBody)
|
||||
if err != nil {
|
||||
return &PRResult{Success: false, Error: fmt.Sprintf("PR creation failed: %v", err)}, err
|
||||
}
|
||||
|
||||
// Update stats
|
||||
s.stats.RecordPRSubmitted(issue.Repo)
|
||||
|
||||
// Notify user
|
||||
s.notify.Notify("BugSETI", fmt.Sprintf("PR #%d submitted for issue #%d", prNumber, issue.Number))
|
||||
|
||||
return &PRResult{
|
||||
Success: true,
|
||||
PRURL: prURL,
|
||||
PRNumber: prNumber,
|
||||
ForkOwner: forkOwner,
|
||||
}, nil
|
||||
}
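A minimal sketch of driving this Fork -> Branch -> Commit -> Push -> PR flow from the workbench side. The issue, seeder and submitService variables are assumptions for illustration; only the field and method names come from the types above:

```go
sub := &PRSubmission{
	Issue:   issue,                         // a previously queued *Issue
	WorkDir: seeder.GetWorkspaceDir(issue), // clone prepared by the SeederService
	Files:   []string{"pkg/forge/client.go"},
}

res, err := submitService.Submit(sub)
if err != nil {
	log.Printf("submit failed: %v", err)
	return
}
log.Printf("opened %s (#%d) from fork %s", res.PRURL, res.PRNumber, res.ForkOwner)
```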
|
||||
|
||||
// ensureFork ensures a fork exists for the repo, returns the fork owner's username.
|
||||
func (s *SubmitService) ensureFork(owner, repo string) (string, error) {
|
||||
// Get current user
|
||||
user, err := s.forge.GetCurrentUser()
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to get current user: %w", err)
|
||||
}
|
||||
username := user.UserName
|
||||
|
||||
// Check if fork already exists
|
||||
_, err = s.forge.GetRepo(username, repo)
|
||||
if err == nil {
|
||||
return username, nil
|
||||
}
|
||||
|
||||
// Fork doesn't exist, create it
|
||||
log.Printf("Creating fork of %s/%s...", owner, repo)
|
||||
_, err = s.forge.ForkRepo(owner, repo, "")
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create fork: %w", err)
|
||||
}
|
||||
|
||||
// Wait for Forgejo to process the fork
|
||||
time.Sleep(2 * time.Second)
|
||||
|
||||
return username, nil
|
||||
}
|
||||
|
||||
// createBranch creates a new branch in the repository.
|
||||
func (s *SubmitService) createBranch(workDir, branch string) error {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// Fetch latest from upstream
|
||||
cmd := exec.CommandContext(ctx, "git", "fetch", "origin")
|
||||
cmd.Dir = workDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
log.Printf("WARNING: git fetch origin failed in %s: %v (proceeding with potentially stale data)", workDir, err)
|
||||
}
|
||||
|
||||
// Create and checkout new branch
|
||||
cmd = exec.CommandContext(ctx, "git", "checkout", "-b", branch)
|
||||
cmd.Dir = workDir
|
||||
var stderr bytes.Buffer
|
||||
cmd.Stderr = &stderr
|
||||
if err := cmd.Run(); err != nil {
|
||||
// Branch might already exist, try to checkout
|
||||
cmd = exec.CommandContext(ctx, "git", "checkout", branch)
|
||||
cmd.Dir = workDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("failed to create/checkout branch: %s: %w", stderr.String(), err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// commitChanges stages and commits the specified files.
|
||||
func (s *SubmitService) commitChanges(workDir string, files []string, message string) error {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// Stage files
|
||||
if len(files) == 0 {
|
||||
// Stage all changes
|
||||
cmd := exec.CommandContext(ctx, "git", "add", "-A")
|
||||
cmd.Dir = workDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("failed to stage changes: %w", err)
|
||||
}
|
||||
} else {
|
||||
// Stage specific files
|
||||
args := append([]string{"add"}, files...)
|
||||
cmd := exec.CommandContext(ctx, "git", args...)
|
||||
cmd.Dir = workDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("failed to stage files: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Check if there are changes to commit
|
||||
cmd := exec.CommandContext(ctx, "git", "diff", "--cached", "--quiet")
|
||||
cmd.Dir = workDir
|
||||
if err := cmd.Run(); err == nil {
|
||||
return fmt.Errorf("no changes to commit")
|
||||
}
|
||||
|
||||
// Commit
|
||||
cmd = exec.CommandContext(ctx, "git", "commit", "-m", message)
|
||||
cmd.Dir = workDir
|
||||
var stderr bytes.Buffer
|
||||
cmd.Stderr = &stderr
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("failed to commit: %s: %w", stderr.String(), err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// pushToFork pushes the branch to the user's fork.
|
||||
func (s *SubmitService) pushToFork(workDir, forkOwner, repoName, branch string) error {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
// Add fork as remote if not exists
|
||||
forkRemote := "fork"
|
||||
cmd := exec.CommandContext(ctx, "git", "remote", "get-url", forkRemote)
|
||||
cmd.Dir = workDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
// Construct fork URL using the forge instance URL
|
||||
forkURL := fmt.Sprintf("%s/%s/%s.git", strings.TrimRight(s.forge.URL(), "/"), forkOwner, repoName)
|
||||
|
||||
// Embed token for HTTPS push auth
|
||||
if s.forge.Token() != "" {
|
||||
forkURL = strings.Replace(forkURL, "://", fmt.Sprintf("://bugseti:%s@", s.forge.Token()), 1)
|
||||
}
|
||||
|
||||
cmd = exec.CommandContext(ctx, "git", "remote", "add", forkRemote, forkURL)
|
||||
cmd.Dir = workDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("failed to add fork remote: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Push to fork
|
||||
cmd = exec.CommandContext(ctx, "git", "push", "-u", forkRemote, branch)
|
||||
cmd.Dir = workDir
|
||||
var stderr bytes.Buffer
|
||||
cmd.Stderr = &stderr
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("failed to push: %s: %w", stderr.String(), err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// createPR creates a pull request using the Forgejo API.
|
||||
func (s *SubmitService) createPR(owner, repo, forkOwner, branch, title, body string) (string, int, error) {
|
||||
pr, err := s.forge.CreatePullRequest(owner, repo, forgejo.CreatePullRequestOption{
|
||||
Head: fmt.Sprintf("%s:%s", forkOwner, branch),
|
||||
Base: "main",
|
||||
Title: title,
|
||||
Body: body,
|
||||
})
|
||||
if err != nil {
|
||||
return "", 0, fmt.Errorf("failed to create PR: %w", err)
|
||||
}
|
||||
|
||||
return pr.HTMLURL, int(pr.Index), nil
|
||||
}
|
||||
|
||||
// generatePRBody creates a default PR body for an issue.
|
||||
func (s *SubmitService) generatePRBody(issue *Issue) string {
|
||||
var body strings.Builder
|
||||
|
||||
body.WriteString("## Summary\n\n")
|
||||
body.WriteString(fmt.Sprintf("This PR addresses issue #%d.\n\n", issue.Number))
|
||||
|
||||
if issue.Context != nil && issue.Context.Summary != "" {
|
||||
body.WriteString("## Context\n\n")
|
||||
body.WriteString(issue.Context.Summary)
|
||||
body.WriteString("\n\n")
|
||||
}
|
||||
|
||||
body.WriteString("## Changes\n\n")
|
||||
body.WriteString("<!-- Describe your changes here -->\n\n")
|
||||
|
||||
body.WriteString("## Testing\n\n")
|
||||
body.WriteString("<!-- Describe how you tested your changes -->\n\n")
|
||||
|
||||
body.WriteString("---\n\n")
|
||||
body.WriteString("*Submitted via [BugSETI](https://forge.lthn.ai/core/cli) - Distributed Bug Fixing*\n")
|
||||
|
||||
return body.String()
|
||||
}
|
||||
|
||||
// GetPRStatus checks the status of a submitted PR.
|
||||
func (s *SubmitService) GetPRStatus(repo string, prNumber int) (*PRStatus, error) {
|
||||
owner, repoName, err := splitRepo(repo)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
pr, err := s.forge.GetPullRequest(owner, repoName, int64(prNumber))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get PR status: %w", err)
|
||||
}
|
||||
|
||||
status := &PRStatus{
|
||||
State: string(pr.State),
|
||||
Mergeable: pr.Mergeable,
|
||||
}
|
||||
|
||||
// Check CI status via combined commit status
|
||||
if pr.Head != nil {
|
||||
combined, err := s.forge.GetCombinedStatus(owner, repoName, pr.Head.Sha)
|
||||
if err == nil && combined != nil {
|
||||
status.CIPassing = combined.State == forgejo.StatusSuccess
|
||||
}
|
||||
}
|
||||
|
||||
// Check review status
|
||||
reviews, err := s.forge.ListPRReviews(owner, repoName, int64(prNumber))
|
||||
if err == nil {
|
||||
for _, review := range reviews {
|
||||
if review.State == forgejo.ReviewStateApproved {
|
||||
status.Approved = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return status, nil
|
||||
}
|
||||
|
||||
// PRStatus represents the current status of a PR.
|
||||
type PRStatus struct {
|
||||
State string `json:"state"`
|
||||
Mergeable bool `json:"mergeable"`
|
||||
CIPassing bool `json:"ciPassing"`
|
||||
Approved bool `json:"approved"`
|
||||
}
|
||||
|
|
@@ -1,234 +0,0 @@
|
|||
package bugseti
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func testSubmitService(t *testing.T) *SubmitService {
|
||||
t.Helper()
|
||||
cfg := testConfigService(t, nil, nil)
|
||||
notify := &NotifyService{enabled: false, config: cfg}
|
||||
stats := &StatsService{
|
||||
config: cfg,
|
||||
stats: &Stats{
|
||||
ReposContributed: make(map[string]*RepoStats),
|
||||
DailyActivity: make(map[string]*DayStats),
|
||||
},
|
||||
}
|
||||
return NewSubmitService(cfg, notify, stats, nil)
|
||||
}
|
||||
|
||||
// --- NewSubmitService / ServiceName ---
|
||||
|
||||
func TestNewSubmitService_Good(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
if s == nil {
|
||||
t.Fatal("expected non-nil SubmitService")
|
||||
}
|
||||
if s.config == nil || s.notify == nil || s.stats == nil {
|
||||
t.Fatal("expected all dependencies set")
|
||||
}
|
||||
}
|
||||
|
||||
func TestServiceName_Good(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
if got := s.ServiceName(); got != "SubmitService" {
|
||||
t.Fatalf("expected %q, got %q", "SubmitService", got)
|
||||
}
|
||||
}
|
||||
|
||||
// --- Submit validation ---
|
||||
|
||||
func TestSubmit_Bad_NilSubmission(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
_, err := s.Submit(nil)
|
||||
if err == nil {
|
||||
t.Fatal("expected error for nil submission")
|
||||
}
|
||||
if !strings.Contains(err.Error(), "invalid submission") {
|
||||
t.Fatalf("unexpected error: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmit_Bad_NilIssue(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
_, err := s.Submit(&PRSubmission{Issue: nil})
|
||||
if err == nil {
|
||||
t.Fatal("expected error for nil issue")
|
||||
}
|
||||
if !strings.Contains(err.Error(), "invalid submission") {
|
||||
t.Fatalf("unexpected error: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmit_Bad_EmptyWorkDir(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
_, err := s.Submit(&PRSubmission{
|
||||
Issue: &Issue{Number: 1, Repo: "owner/repo", Title: "test"},
|
||||
WorkDir: "",
|
||||
})
|
||||
if err == nil {
|
||||
t.Fatal("expected error for empty work directory")
|
||||
}
|
||||
if !strings.Contains(err.Error(), "work directory not specified") {
|
||||
t.Fatalf("unexpected error: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// --- generatePRBody ---
|
||||
|
||||
func TestGeneratePRBody_Good_Basic(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
issue := &Issue{Number: 42, Repo: "owner/repo", Title: "A bug"}
|
||||
body := s.generatePRBody(issue)
|
||||
|
||||
if !strings.Contains(body, "#42") {
|
||||
t.Fatal("PR body should reference issue number")
|
||||
}
|
||||
if !strings.Contains(body, "## Summary") {
|
||||
t.Fatal("PR body should have Summary section")
|
||||
}
|
||||
if !strings.Contains(body, "## Changes") {
|
||||
t.Fatal("PR body should have Changes section")
|
||||
}
|
||||
if !strings.Contains(body, "## Testing") {
|
||||
t.Fatal("PR body should have Testing section")
|
||||
}
|
||||
if !strings.Contains(body, "BugSETI") {
|
||||
t.Fatal("PR body should have BugSETI attribution")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGeneratePRBody_Good_WithContext(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
issue := &Issue{
|
||||
Number: 7,
|
||||
Repo: "owner/repo",
|
||||
Title: "Fix login",
|
||||
Context: &IssueContext{
|
||||
Summary: "The login endpoint returns 500 on empty password.",
|
||||
},
|
||||
}
|
||||
body := s.generatePRBody(issue)
|
||||
|
||||
if !strings.Contains(body, "## Context") {
|
||||
t.Fatal("PR body should have Context section when context exists")
|
||||
}
|
||||
if !strings.Contains(body, "login endpoint returns 500") {
|
||||
t.Fatal("PR body should include context summary")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGeneratePRBody_Good_WithoutContext(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
issue := &Issue{Number: 7, Repo: "owner/repo", Title: "Fix login"}
|
||||
body := s.generatePRBody(issue)
|
||||
|
||||
if strings.Contains(body, "## Context") {
|
||||
t.Fatal("PR body should omit Context section when no context")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGeneratePRBody_Good_EmptyContextSummary(t *testing.T) {
|
||||
s := testSubmitService(t)
|
||||
issue := &Issue{
|
||||
Number: 7,
|
||||
Repo: "owner/repo",
|
||||
Title: "Fix login",
|
||||
Context: &IssueContext{Summary: ""},
|
||||
}
|
||||
body := s.generatePRBody(issue)
|
||||
|
||||
if strings.Contains(body, "## Context") {
|
||||
t.Fatal("PR body should omit Context section when summary is empty")
|
||||
}
|
||||
}
|
||||
|
||||
// --- PRSubmission / PRResult struct tests ---
|
||||
|
||||
func TestPRSubmission_Good_Defaults(t *testing.T) {
|
||||
sub := &PRSubmission{
|
||||
Issue: &Issue{Number: 10, Repo: "o/r"},
|
||||
WorkDir: "/tmp/work",
|
||||
}
|
||||
if sub.Branch != "" {
|
||||
t.Fatal("expected empty branch to be default")
|
||||
}
|
||||
if sub.Title != "" {
|
||||
t.Fatal("expected empty title to be default")
|
||||
}
|
||||
if sub.CommitMsg != "" {
|
||||
t.Fatal("expected empty commit msg to be default")
|
||||
}
|
||||
}
|
||||
|
||||
func TestPRResult_Good_Success(t *testing.T) {
|
||||
r := &PRResult{
|
||||
Success: true,
|
||||
PRURL: "https://forge.lthn.ai/o/r/pulls/1",
|
||||
PRNumber: 1,
|
||||
ForkOwner: "me",
|
||||
}
|
||||
if !r.Success {
|
||||
t.Fatal("expected success")
|
||||
}
|
||||
if r.Error != "" {
|
||||
t.Fatal("expected no error on success")
|
||||
}
|
||||
}
|
||||
|
||||
func TestPRResult_Good_Failure(t *testing.T) {
|
||||
r := &PRResult{
|
||||
Success: false,
|
||||
Error: "fork failed: something",
|
||||
}
|
||||
if r.Success {
|
||||
t.Fatal("expected failure")
|
||||
}
|
||||
if r.Error == "" {
|
||||
t.Fatal("expected error message")
|
||||
}
|
||||
}
|
||||
|
||||
// --- PRStatus struct ---
|
||||
|
||||
func TestPRStatus_Good(t *testing.T) {
|
||||
s := &PRStatus{
|
||||
State: "open",
|
||||
Mergeable: true,
|
||||
CIPassing: true,
|
||||
Approved: false,
|
||||
}
|
||||
if s.State != "open" {
|
||||
t.Fatalf("expected open, got %s", s.State)
|
||||
}
|
||||
if !s.Mergeable {
|
||||
t.Fatal("expected mergeable")
|
||||
}
|
||||
if s.Approved {
|
||||
t.Fatal("expected not approved")
|
||||
}
|
||||
}
|
||||
|
||||
// --- splitRepo ---
|
||||
|
||||
func TestSplitRepo_Good(t *testing.T) {
|
||||
owner, repo, err := splitRepo("myorg/myrepo")
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %v", err)
|
||||
}
|
||||
if owner != "myorg" || repo != "myrepo" {
|
||||
t.Fatalf("expected myorg/myrepo, got %s/%s", owner, repo)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSplitRepo_Bad(t *testing.T) {
|
||||
_, _, err := splitRepo("invalidrepo")
|
||||
if err == nil {
|
||||
t.Fatal("expected error for invalid repo format")
|
||||
}
|
||||
if !strings.Contains(err.Error(), "invalid repo format") {
|
||||
t.Fatalf("unexpected error: %v", err)
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,176 +0,0 @@
|
|||
// Package updater provides auto-update functionality for BugSETI.
|
||||
package updater
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"regexp"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Channel represents an update channel.
type Channel string

const (
	// ChannelStable is the production release channel.
	// Tags: bugseti-vX.Y.Z (e.g., bugseti-v1.0.0)
	ChannelStable Channel = "stable"

	// ChannelBeta is the pre-release testing channel.
	// Tags: bugseti-vX.Y.Z-beta.N (e.g., bugseti-v1.0.0-beta.1)
	ChannelBeta Channel = "beta"

	// ChannelNightly is the latest development builds channel.
	// Tags: bugseti-nightly-YYYYMMDD (e.g., bugseti-nightly-20260205)
	ChannelNightly Channel = "nightly"
)
|
||||
|
||||
// String returns the string representation of the channel.
|
||||
func (c Channel) String() string {
|
||||
return string(c)
|
||||
}
|
||||
|
||||
// DisplayName returns a human-readable name for the channel.
|
||||
func (c Channel) DisplayName() string {
|
||||
switch c {
|
||||
case ChannelStable:
|
||||
return "Stable"
|
||||
case ChannelBeta:
|
||||
return "Beta"
|
||||
case ChannelNightly:
|
||||
return "Nightly"
|
||||
default:
|
||||
return "Unknown"
|
||||
}
|
||||
}
|
||||
|
||||
// Description returns a description of the channel.
|
||||
func (c Channel) Description() string {
|
||||
switch c {
|
||||
case ChannelStable:
|
||||
return "Production releases - most stable, recommended for most users"
|
||||
case ChannelBeta:
|
||||
return "Pre-release builds - new features being tested before stable release"
|
||||
case ChannelNightly:
|
||||
return "Latest development builds - bleeding edge, may be unstable"
|
||||
default:
|
||||
return "Unknown channel"
|
||||
}
|
||||
}
|
||||
|
||||
// TagPrefix returns the tag prefix used for this channel.
|
||||
func (c Channel) TagPrefix() string {
|
||||
switch c {
|
||||
case ChannelStable:
|
||||
return "bugseti-v"
|
||||
case ChannelBeta:
|
||||
return "bugseti-v"
|
||||
case ChannelNightly:
|
||||
return "bugseti-nightly-"
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
}
|
||||
|
||||
// TagPattern returns a regex pattern to match tags for this channel.
|
||||
func (c Channel) TagPattern() *regexp.Regexp {
|
||||
switch c {
|
||||
case ChannelStable:
|
||||
// Match bugseti-vX.Y.Z but NOT bugseti-vX.Y.Z-beta.N
|
||||
return regexp.MustCompile(`^bugseti-v(\d+\.\d+\.\d+)$`)
|
||||
case ChannelBeta:
|
||||
// Match bugseti-vX.Y.Z-beta.N
|
||||
return regexp.MustCompile(`^bugseti-v(\d+\.\d+\.\d+-beta\.\d+)$`)
|
||||
case ChannelNightly:
|
||||
// Match bugseti-nightly-YYYYMMDD
|
||||
return regexp.MustCompile(`^bugseti-nightly-(\d{8})$`)
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// MatchesTag returns true if the given tag matches this channel's pattern.
|
||||
func (c Channel) MatchesTag(tag string) bool {
|
||||
pattern := c.TagPattern()
|
||||
if pattern == nil {
|
||||
return false
|
||||
}
|
||||
return pattern.MatchString(tag)
|
||||
}
|
||||
|
||||
// ExtractVersion extracts the version from a tag for this channel.
|
||||
func (c Channel) ExtractVersion(tag string) string {
|
||||
pattern := c.TagPattern()
|
||||
if pattern == nil {
|
||||
return ""
|
||||
}
|
||||
matches := pattern.FindStringSubmatch(tag)
|
||||
if len(matches) < 2 {
|
||||
return ""
|
||||
}
|
||||
return matches[1]
|
||||
}
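Concretely, the tag patterns above behave like this (a fragment assuming an fmt import; the tags are the examples from the channel doc comments):

```go
fmt.Println(ChannelStable.MatchesTag("bugseti-v1.2.0"))                // true
fmt.Println(ChannelStable.MatchesTag("bugseti-v1.2.0-beta.1"))         // false (beta tags are excluded)
fmt.Println(ChannelBeta.ExtractVersion("bugseti-v1.2.0-beta.1"))       // 1.2.0-beta.1
fmt.Println(ChannelNightly.ExtractVersion("bugseti-nightly-20260205")) // 20260205
```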
|
||||
|
||||
// AllChannels returns all available channels.
|
||||
func AllChannels() []Channel {
|
||||
return []Channel{ChannelStable, ChannelBeta, ChannelNightly}
|
||||
}
|
||||
|
||||
// ParseChannel parses a string into a Channel.
|
||||
func ParseChannel(s string) (Channel, error) {
|
||||
switch strings.ToLower(s) {
|
||||
case "stable":
|
||||
return ChannelStable, nil
|
||||
case "beta":
|
||||
return ChannelBeta, nil
|
||||
case "nightly":
|
||||
return ChannelNightly, nil
|
||||
default:
|
||||
return "", fmt.Errorf("unknown channel: %s", s)
|
||||
}
|
||||
}
|
||||
|
||||
// ChannelInfo contains information about an update channel.
|
||||
type ChannelInfo struct {
|
||||
ID string `json:"id"`
|
||||
Name string `json:"name"`
|
||||
Description string `json:"description"`
|
||||
}
|
||||
|
||||
// GetChannelInfo returns information about a channel.
|
||||
func GetChannelInfo(c Channel) ChannelInfo {
|
||||
return ChannelInfo{
|
||||
ID: c.String(),
|
||||
Name: c.DisplayName(),
|
||||
Description: c.Description(),
|
||||
}
|
||||
}
|
||||
|
||||
// GetAllChannelInfo returns information about all channels.
|
||||
func GetAllChannelInfo() []ChannelInfo {
|
||||
channels := AllChannels()
|
||||
info := make([]ChannelInfo, len(channels))
|
||||
for i, c := range channels {
|
||||
info[i] = GetChannelInfo(c)
|
||||
}
|
||||
return info
|
||||
}
|
||||
|
||||
// IncludesPrerelease returns true if the channel includes pre-release versions.
|
||||
func (c Channel) IncludesPrerelease() bool {
|
||||
return c == ChannelBeta || c == ChannelNightly
|
||||
}
|
||||
|
||||
// IncludesChannel returns true if this channel should include releases from the given channel.
|
||||
// For example, beta channel includes stable releases, nightly includes both.
|
||||
func (c Channel) IncludesChannel(other Channel) bool {
|
||||
switch c {
|
||||
case ChannelStable:
|
||||
return other == ChannelStable
|
||||
case ChannelBeta:
|
||||
return other == ChannelStable || other == ChannelBeta
|
||||
case ChannelNightly:
|
||||
return true // Nightly users can see all releases
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,379 +0,0 @@
|
|||
// Package updater provides auto-update functionality for BugSETI.
|
||||
package updater
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"runtime"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"golang.org/x/mod/semver"
|
||||
)
|
||||
|
||||
const (
|
||||
// GitHubReleasesAPI is the GitHub API endpoint for releases.
|
||||
GitHubReleasesAPI = "https://api.github.com/repos/%s/%s/releases"
|
||||
|
||||
// DefaultOwner is the default GitHub repository owner.
|
||||
DefaultOwner = "host-uk"
|
||||
|
||||
// DefaultRepo is the default GitHub repository name.
|
||||
DefaultRepo = "core"
|
||||
|
||||
// DefaultCheckInterval is the default interval between update checks.
|
||||
DefaultCheckInterval = 6 * time.Hour
|
||||
)
|
||||
|
||||
// GitHubRelease represents a GitHub release from the API.
|
||||
type GitHubRelease struct {
|
||||
TagName string `json:"tag_name"`
|
||||
Name string `json:"name"`
|
||||
Body string `json:"body"`
|
||||
Draft bool `json:"draft"`
|
||||
Prerelease bool `json:"prerelease"`
|
||||
PublishedAt time.Time `json:"published_at"`
|
||||
Assets []GitHubAsset `json:"assets"`
|
||||
HTMLURL string `json:"html_url"`
|
||||
}
|
||||
|
||||
// GitHubAsset represents a release asset from the GitHub API.
|
||||
type GitHubAsset struct {
|
||||
Name string `json:"name"`
|
||||
Size int64 `json:"size"`
|
||||
BrowserDownloadURL string `json:"browser_download_url"`
|
||||
ContentType string `json:"content_type"`
|
||||
}
|
||||
|
||||
// ReleaseInfo contains information about an available release.
|
||||
type ReleaseInfo struct {
|
||||
Version string `json:"version"`
|
||||
Channel Channel `json:"channel"`
|
||||
Tag string `json:"tag"`
|
||||
Name string `json:"name"`
|
||||
Body string `json:"body"`
|
||||
PublishedAt time.Time `json:"publishedAt"`
|
||||
HTMLURL string `json:"htmlUrl"`
|
||||
BinaryURL string `json:"binaryUrl"`
|
||||
ArchiveURL string `json:"archiveUrl"`
|
||||
ChecksumURL string `json:"checksumUrl"`
|
||||
Size int64 `json:"size"`
|
||||
}
|
||||
|
||||
// UpdateCheckResult contains the result of an update check.
|
||||
type UpdateCheckResult struct {
|
||||
Available bool `json:"available"`
|
||||
CurrentVersion string `json:"currentVersion"`
|
||||
LatestVersion string `json:"latestVersion"`
|
||||
Release *ReleaseInfo `json:"release,omitempty"`
|
||||
Error string `json:"error,omitempty"`
|
||||
CheckedAt time.Time `json:"checkedAt"`
|
||||
}
|
||||
|
||||
// Checker checks for available updates.
|
||||
type Checker struct {
|
||||
owner string
|
||||
repo string
|
||||
httpClient *http.Client
|
||||
}
|
||||
|
||||
// NewChecker creates a new update checker.
|
||||
func NewChecker() *Checker {
|
||||
return &Checker{
|
||||
owner: DefaultOwner,
|
||||
repo: DefaultRepo,
|
||||
httpClient: &http.Client{
|
||||
Timeout: 30 * time.Second,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// CheckForUpdate checks if a newer version is available.
|
||||
func (c *Checker) CheckForUpdate(ctx context.Context, currentVersion string, channel Channel) (*UpdateCheckResult, error) {
|
||||
result := &UpdateCheckResult{
|
||||
CurrentVersion: currentVersion,
|
||||
CheckedAt: time.Now(),
|
||||
}
|
||||
|
||||
// Fetch releases from GitHub
|
||||
releases, err := c.fetchReleases(ctx)
|
||||
if err != nil {
|
||||
result.Error = err.Error()
|
||||
return result, err
|
||||
}
|
||||
|
||||
// Find the latest release for the channel
|
||||
latest := c.findLatestRelease(releases, channel)
|
||||
if latest == nil {
|
||||
result.LatestVersion = currentVersion
|
||||
return result, nil
|
||||
}
|
||||
|
||||
result.LatestVersion = latest.Version
|
||||
result.Release = latest
|
||||
|
||||
// Compare versions
|
||||
if c.isNewerVersion(currentVersion, latest.Version, channel) {
|
||||
result.Available = true
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
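A minimal usage sketch for a periodic check; the current version string and the log handling are assumptions, not part of the original wiring:

```go
checker := NewChecker()
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

res, err := checker.CheckForUpdate(ctx, "1.2.0", ChannelStable)
switch {
case err != nil:
	log.Printf("update check failed: %v", err)
case res.Available:
	log.Printf("update available: %s -> %s (%s)", res.CurrentVersion, res.LatestVersion, res.Release.HTMLURL)
default:
	log.Printf("already up to date (%s)", res.CurrentVersion)
}
```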
|
||||
|
||||
// fetchReleases fetches all releases from GitHub.
|
||||
func (c *Checker) fetchReleases(ctx context.Context) ([]GitHubRelease, error) {
|
||||
url := fmt.Sprintf(GitHubReleasesAPI, c.owner, c.repo)
|
||||
|
||||
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create request: %w", err)
|
||||
}
|
||||
|
||||
req.Header.Set("Accept", "application/vnd.github.v3+json")
|
||||
req.Header.Set("User-Agent", "BugSETI-Updater")
|
||||
|
||||
resp, err := c.httpClient.Do(req)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to fetch releases: %w", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return nil, fmt.Errorf("GitHub API returned status %d", resp.StatusCode)
|
||||
}
|
||||
|
||||
var releases []GitHubRelease
|
||||
if err := json.NewDecoder(resp.Body).Decode(&releases); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode releases: %w", err)
|
||||
}
|
||||
|
||||
return releases, nil
|
||||
}
|
||||
|
||||
// findLatestRelease finds the latest release for the given channel.
|
||||
func (c *Checker) findLatestRelease(releases []GitHubRelease, channel Channel) *ReleaseInfo {
|
||||
var candidates []ReleaseInfo
|
||||
|
||||
for _, release := range releases {
|
||||
// Skip drafts
|
||||
if release.Draft {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if the tag matches our BugSETI release pattern
|
||||
if !strings.HasPrefix(release.TagName, "bugseti-") {
|
||||
continue
|
||||
}
|
||||
|
||||
// Determine the channel for this release
|
||||
releaseChannel := c.determineChannel(release.TagName)
|
||||
if releaseChannel == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if this release should be considered for the requested channel
|
||||
if !channel.IncludesChannel(releaseChannel) {
|
||||
continue
|
||||
}
|
||||
|
||||
// Extract version
|
||||
version := releaseChannel.ExtractVersion(release.TagName)
|
||||
if version == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Find the appropriate asset for this platform
|
||||
binaryName := c.getBinaryName()
|
||||
archiveName := c.getArchiveName()
|
||||
checksumName := archiveName + ".sha256"
|
||||
|
||||
var binaryURL, archiveURL, checksumURL string
|
||||
var size int64
|
||||
|
||||
for _, asset := range release.Assets {
|
||||
switch asset.Name {
|
||||
case binaryName:
|
||||
binaryURL = asset.BrowserDownloadURL
|
||||
size = asset.Size
|
||||
case archiveName:
|
||||
archiveURL = asset.BrowserDownloadURL
|
||||
if size == 0 {
|
||||
size = asset.Size
|
||||
}
|
||||
case checksumName:
|
||||
checksumURL = asset.BrowserDownloadURL
|
||||
}
|
||||
}
|
||||
|
||||
// Skip if no binary available for this platform
|
||||
if binaryURL == "" && archiveURL == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
candidates = append(candidates, ReleaseInfo{
|
||||
Version: version,
|
||||
Channel: releaseChannel,
|
||||
Tag: release.TagName,
|
||||
Name: release.Name,
|
||||
Body: release.Body,
|
||||
PublishedAt: release.PublishedAt,
|
||||
HTMLURL: release.HTMLURL,
|
||||
BinaryURL: binaryURL,
|
||||
ArchiveURL: archiveURL,
|
||||
ChecksumURL: checksumURL,
|
||||
Size: size,
|
||||
})
|
||||
}
|
||||
|
||||
if len(candidates) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Sort by version (newest first)
|
||||
sort.Slice(candidates, func(i, j int) bool {
|
||||
return c.compareVersions(candidates[i].Version, candidates[j].Version, channel) > 0
|
||||
})
|
||||
|
||||
return &candidates[0]
|
||||
}
|
||||
|
||||
// determineChannel determines the channel from a release tag.
|
||||
func (c *Checker) determineChannel(tag string) Channel {
|
||||
for _, ch := range AllChannels() {
|
||||
if ch.MatchesTag(tag) {
|
||||
return ch
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// getBinaryName returns the binary name for the current platform.
|
||||
func (c *Checker) getBinaryName() string {
|
||||
ext := ""
|
||||
if runtime.GOOS == "windows" {
|
||||
ext = ".exe"
|
||||
}
|
||||
return fmt.Sprintf("bugseti-%s-%s%s", runtime.GOOS, runtime.GOARCH, ext)
|
||||
}
|
||||
|
||||
// getArchiveName returns the archive name for the current platform.
|
||||
func (c *Checker) getArchiveName() string {
|
||||
ext := "tar.gz"
|
||||
if runtime.GOOS == "windows" {
|
||||
ext = "zip"
|
||||
}
|
||||
return fmt.Sprintf("bugseti-%s-%s.%s", runtime.GOOS, runtime.GOARCH, ext)
|
||||
}
|
||||
|
||||
// isNewerVersion returns true if newVersion is newer than currentVersion.
func (c *Checker) isNewerVersion(currentVersion, newVersion string, channel Channel) bool {
	// Handle nightly versions (date-based)
	if channel == ChannelNightly {
		return newVersion > currentVersion
	}

	// Handle dev builds
	if currentVersion == "dev" {
		return true
	}

	// Use semver comparison
	current := c.normalizeSemver(currentVersion)
	latest := c.normalizeSemver(newVersion)

	return semver.Compare(latest, current) > 0
}
|
||||
|
||||
// compareVersions compares two versions.
|
||||
func (c *Checker) compareVersions(v1, v2 string, channel Channel) int {
|
||||
// Handle nightly versions (date-based)
|
||||
if channel == ChannelNightly {
|
||||
if v1 > v2 {
|
||||
return 1
|
||||
} else if v1 < v2 {
|
||||
return -1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
// Use semver comparison
|
||||
return semver.Compare(c.normalizeSemver(v1), c.normalizeSemver(v2))
|
||||
}
|
||||
|
||||
// normalizeSemver ensures a version string has the 'v' prefix for semver.
|
||||
func (c *Checker) normalizeSemver(version string) string {
|
||||
if !strings.HasPrefix(version, "v") {
|
||||
return "v" + version
|
||||
}
|
||||
return version
|
||||
}
|
||||
|
||||
// GetAllReleases returns all BugSETI releases from GitHub.
|
||||
func (c *Checker) GetAllReleases(ctx context.Context) ([]ReleaseInfo, error) {
|
||||
releases, err := c.fetchReleases(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var result []ReleaseInfo
|
||||
for _, release := range releases {
|
||||
if release.Draft {
|
||||
continue
|
||||
}
|
||||
|
||||
if !strings.HasPrefix(release.TagName, "bugseti-") {
|
||||
continue
|
||||
}
|
||||
|
||||
releaseChannel := c.determineChannel(release.TagName)
|
||||
if releaseChannel == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
version := releaseChannel.ExtractVersion(release.TagName)
|
||||
if version == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
binaryName := c.getBinaryName()
|
||||
archiveName := c.getArchiveName()
|
||||
checksumName := archiveName + ".sha256"
|
||||
|
||||
var binaryURL, archiveURL, checksumURL string
|
||||
var size int64
|
||||
|
||||
for _, asset := range release.Assets {
|
||||
switch asset.Name {
|
||||
case binaryName:
|
||||
binaryURL = asset.BrowserDownloadURL
|
||||
size = asset.Size
|
||||
case archiveName:
|
||||
archiveURL = asset.BrowserDownloadURL
|
||||
if size == 0 {
|
||||
size = asset.Size
|
||||
}
|
||||
case checksumName:
|
||||
checksumURL = asset.BrowserDownloadURL
|
||||
}
|
||||
}
|
||||
|
||||
result = append(result, ReleaseInfo{
|
||||
Version: version,
|
||||
Channel: releaseChannel,
|
||||
Tag: release.TagName,
|
||||
Name: release.Name,
|
||||
Body: release.Body,
|
||||
PublishedAt: release.PublishedAt,
|
||||
HTMLURL: release.HTMLURL,
|
||||
BinaryURL: binaryURL,
|
||||
ArchiveURL: archiveURL,
|
||||
ChecksumURL: checksumURL,
|
||||
Size: size,
|
||||
})
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
|
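Note (not part of the deleted file): the release selection above leans on golang.org/x/mod/semver, which only accepts versions carrying a leading "v" — that is the whole reason normalizeSemver exists. A minimal standalone sketch of the same comparison rules, with the nightly flag standing in for the Channel type, which is defined earlier in the deleted checker file:

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/mod/semver"
)

// normalize mirrors normalizeSemver: semver.Compare requires a "v" prefix.
func normalize(v string) string {
	if !strings.HasPrefix(v, "v") {
		return "v" + v
	}
	return v
}

// isNewer mirrors isNewerVersion: stable/beta use semver ordering, while
// date-based nightly tags compare correctly as plain strings.
func isNewer(current, candidate string, nightly bool) bool {
	if nightly {
		return candidate > current
	}
	if current == "dev" {
		return true
	}
	return semver.Compare(normalize(candidate), normalize(current)) > 0
}

func main() {
	fmt.Println(isNewer("1.2.0", "1.3.0-beta.1", false))               // true
	fmt.Println(isNewer("1.3.0", "1.3.0-beta.1", false))               // false: prerelease < release
	fmt.Println(isNewer("nightly-20260204", "nightly-20260205", true)) // true
}
```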
@ -1,427 +0,0 @@
|
|||
// Package updater provides auto-update functionality for BugSETI.
|
||||
package updater
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"archive/zip"
|
||||
"compress/gzip"
|
||||
"context"
|
||||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// DownloadProgress reports download progress.
|
||||
type DownloadProgress struct {
|
||||
BytesDownloaded int64 `json:"bytesDownloaded"`
|
||||
TotalBytes int64 `json:"totalBytes"`
|
||||
Percent float64 `json:"percent"`
|
||||
}
|
||||
|
||||
// DownloadResult contains the result of a download operation.
|
||||
type DownloadResult struct {
|
||||
BinaryPath string `json:"binaryPath"`
|
||||
Version string `json:"version"`
|
||||
Checksum string `json:"checksum"`
|
||||
VerifiedOK bool `json:"verifiedOK"`
|
||||
}
|
||||
|
||||
// Downloader handles downloading and verifying updates.
|
||||
type Downloader struct {
|
||||
httpClient *http.Client
|
||||
stagingDir string
|
||||
onProgress func(DownloadProgress)
|
||||
}
|
||||
|
||||
// NewDownloader creates a new update downloader.
|
||||
func NewDownloader() (*Downloader, error) {
|
||||
// Create staging directory in user's temp dir
|
||||
stagingDir := filepath.Join(os.TempDir(), "bugseti-updates")
|
||||
if err := os.MkdirAll(stagingDir, 0755); err != nil {
|
||||
return nil, fmt.Errorf("failed to create staging directory: %w", err)
|
||||
}
|
||||
|
||||
return &Downloader{
|
||||
httpClient: &http.Client{},
|
||||
stagingDir: stagingDir,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// SetProgressCallback sets a callback for download progress updates.
|
||||
func (d *Downloader) SetProgressCallback(cb func(DownloadProgress)) {
|
||||
d.onProgress = cb
|
||||
}
|
||||
|
||||
// Download downloads a release and stages it for installation.
|
||||
func (d *Downloader) Download(ctx context.Context, release *ReleaseInfo) (*DownloadResult, error) {
|
||||
result := &DownloadResult{
|
||||
Version: release.Version,
|
||||
}
|
||||
|
||||
// Prefer archive download for extraction
|
||||
downloadURL := release.ArchiveURL
|
||||
if downloadURL == "" {
|
||||
downloadURL = release.BinaryURL
|
||||
}
|
||||
if downloadURL == "" {
|
||||
return nil, fmt.Errorf("no download URL available for release %s", release.Version)
|
||||
}
|
||||
|
||||
// Download the checksum first if available
|
||||
var expectedChecksum string
|
||||
if release.ChecksumURL != "" {
|
||||
checksum, err := d.downloadChecksum(ctx, release.ChecksumURL)
|
||||
if err != nil {
|
||||
// Log but don't fail - checksum verification is optional
|
||||
fmt.Printf("Warning: could not download checksum: %v\n", err)
|
||||
} else {
|
||||
expectedChecksum = checksum
|
||||
}
|
||||
}
|
||||
|
||||
// Download the file
|
||||
downloadedPath, err := d.downloadFile(ctx, downloadURL, release.Size)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to download update: %w", err)
|
||||
}
|
||||
|
||||
// Verify checksum if available
|
||||
actualChecksum, err := d.calculateChecksum(downloadedPath)
|
||||
if err != nil {
|
||||
os.Remove(downloadedPath)
|
||||
return nil, fmt.Errorf("failed to calculate checksum: %w", err)
|
||||
}
|
||||
result.Checksum = actualChecksum
|
||||
|
||||
if expectedChecksum != "" {
|
||||
if actualChecksum != expectedChecksum {
|
||||
os.Remove(downloadedPath)
|
||||
return nil, fmt.Errorf("checksum mismatch: expected %s, got %s", expectedChecksum, actualChecksum)
|
||||
}
|
||||
result.VerifiedOK = true
|
||||
}
|
||||
|
||||
// Extract if it's an archive
|
||||
var binaryPath string
|
||||
if strings.HasSuffix(downloadURL, ".tar.gz") {
|
||||
binaryPath, err = d.extractTarGz(downloadedPath)
|
||||
} else if strings.HasSuffix(downloadURL, ".zip") {
|
||||
binaryPath, err = d.extractZip(downloadedPath)
|
||||
} else {
|
||||
// It's a raw binary
|
||||
binaryPath = downloadedPath
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
os.Remove(downloadedPath)
|
||||
return nil, fmt.Errorf("failed to extract archive: %w", err)
|
||||
}
|
||||
|
||||
// Make the binary executable (Unix only)
|
||||
if runtime.GOOS != "windows" {
|
||||
if err := os.Chmod(binaryPath, 0755); err != nil {
|
||||
return nil, fmt.Errorf("failed to make binary executable: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
result.BinaryPath = binaryPath
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// downloadChecksum downloads and parses a checksum file.
|
||||
func (d *Downloader) downloadChecksum(ctx context.Context, url string) (string, error) {
|
||||
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
req.Header.Set("User-Agent", "BugSETI-Updater")
|
||||
|
||||
resp, err := d.httpClient.Do(req)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return "", fmt.Errorf("HTTP %d", resp.StatusCode)
|
||||
}
|
||||
|
||||
data, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
// Checksum file format: "hash filename" or just "hash"
|
||||
parts := strings.Fields(strings.TrimSpace(string(data)))
|
||||
if len(parts) == 0 {
|
||||
return "", fmt.Errorf("empty checksum file")
|
||||
}
|
||||
|
||||
return parts[0], nil
|
||||
}
|
||||
|
||||
// downloadFile downloads a file with progress reporting.
|
||||
func (d *Downloader) downloadFile(ctx context.Context, url string, expectedSize int64) (string, error) {
|
||||
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
req.Header.Set("User-Agent", "BugSETI-Updater")
|
||||
|
||||
resp, err := d.httpClient.Do(req)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return "", fmt.Errorf("HTTP %d", resp.StatusCode)
|
||||
}
|
||||
|
||||
// Get total size from response or use expected size
|
||||
totalSize := resp.ContentLength
|
||||
if totalSize <= 0 {
|
||||
totalSize = expectedSize
|
||||
}
|
||||
|
||||
// Create output file
|
||||
filename := filepath.Base(url)
|
||||
outPath := filepath.Join(d.stagingDir, filename)
|
||||
out, err := os.Create(outPath)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer out.Close()
|
||||
|
||||
// Download with progress
|
||||
var downloaded int64
|
||||
buf := make([]byte, 32*1024) // 32KB buffer
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
os.Remove(outPath)
|
||||
return "", ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
n, readErr := resp.Body.Read(buf)
|
||||
if n > 0 {
|
||||
_, writeErr := out.Write(buf[:n])
|
||||
if writeErr != nil {
|
||||
os.Remove(outPath)
|
||||
return "", writeErr
|
||||
}
|
||||
downloaded += int64(n)
|
||||
|
||||
// Report progress
|
||||
if d.onProgress != nil && totalSize > 0 {
|
||||
d.onProgress(DownloadProgress{
|
||||
BytesDownloaded: downloaded,
|
||||
TotalBytes: totalSize,
|
||||
Percent: float64(downloaded) / float64(totalSize) * 100,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
if readErr == io.EOF {
|
||||
break
|
||||
}
|
||||
if readErr != nil {
|
||||
os.Remove(outPath)
|
||||
return "", readErr
|
||||
}
|
||||
}
|
||||
|
||||
return outPath, nil
|
||||
}
|
||||
|
||||
// calculateChecksum calculates the SHA256 checksum of a file.
|
||||
func (d *Downloader) calculateChecksum(path string) (string, error) {
|
||||
f, err := os.Open(path)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
h := sha256.New()
|
||||
if _, err := io.Copy(h, f); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return hex.EncodeToString(h.Sum(nil)), nil
|
||||
}
|
||||
|
||||
// extractTarGz extracts a .tar.gz archive and returns the path to the binary.
|
||||
func (d *Downloader) extractTarGz(archivePath string) (string, error) {
|
||||
f, err := os.Open(archivePath)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
gzr, err := gzip.NewReader(f)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer gzr.Close()
|
||||
|
||||
tr := tar.NewReader(gzr)
|
||||
|
||||
extractDir := filepath.Join(d.stagingDir, "extracted")
|
||||
os.RemoveAll(extractDir)
|
||||
if err := os.MkdirAll(extractDir, 0755); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
var binaryPath string
|
||||
binaryName := "bugseti"
|
||||
if runtime.GOOS == "windows" {
|
||||
binaryName = "bugseti.exe"
|
||||
}
|
||||
|
||||
for {
|
||||
header, err := tr.Next()
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
target := filepath.Join(extractDir, header.Name)
|
||||
|
||||
// Prevent directory traversal
|
||||
if !strings.HasPrefix(filepath.Clean(target), filepath.Clean(extractDir)) {
|
||||
return "", fmt.Errorf("invalid file path in archive: %s", header.Name)
|
||||
}
|
||||
|
||||
switch header.Typeflag {
|
||||
case tar.TypeDir:
|
||||
if err := os.MkdirAll(target, 0755); err != nil {
|
||||
return "", err
|
||||
}
|
||||
case tar.TypeReg:
|
||||
// Create parent directory
|
||||
if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
outFile, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(header.Mode))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if _, err := io.Copy(outFile, tr); err != nil {
|
||||
outFile.Close()
|
||||
return "", err
|
||||
}
|
||||
outFile.Close()
|
||||
|
||||
// Check if this is the binary we're looking for
|
||||
if filepath.Base(header.Name) == binaryName {
|
||||
binaryPath = target
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Clean up archive
|
||||
os.Remove(archivePath)
|
||||
|
||||
if binaryPath == "" {
|
||||
return "", fmt.Errorf("binary not found in archive")
|
||||
}
|
||||
|
||||
return binaryPath, nil
|
||||
}
|
||||
|
||||
// extractZip extracts a .zip archive and returns the path to the binary.
|
||||
func (d *Downloader) extractZip(archivePath string) (string, error) {
|
||||
r, err := zip.OpenReader(archivePath)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer r.Close()
|
||||
|
||||
extractDir := filepath.Join(d.stagingDir, "extracted")
|
||||
os.RemoveAll(extractDir)
|
||||
if err := os.MkdirAll(extractDir, 0755); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
var binaryPath string
|
||||
binaryName := "bugseti"
|
||||
if runtime.GOOS == "windows" {
|
||||
binaryName = "bugseti.exe"
|
||||
}
|
||||
|
||||
for _, f := range r.File {
|
||||
target := filepath.Join(extractDir, f.Name)
|
||||
|
||||
// Prevent directory traversal
|
||||
if !strings.HasPrefix(filepath.Clean(target), filepath.Clean(extractDir)) {
|
||||
return "", fmt.Errorf("invalid file path in archive: %s", f.Name)
|
||||
}
|
||||
|
||||
if f.FileInfo().IsDir() {
|
||||
if err := os.MkdirAll(target, 0755); err != nil {
|
||||
return "", err
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
// Create parent directory
|
||||
if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
rc, err := f.Open()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
outFile, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, f.Mode())
|
||||
if err != nil {
|
||||
rc.Close()
|
||||
return "", err
|
||||
}
|
||||
|
||||
_, err = io.Copy(outFile, rc)
|
||||
rc.Close()
|
||||
outFile.Close()
|
||||
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
// Check if this is the binary we're looking for
|
||||
if filepath.Base(f.Name) == binaryName {
|
||||
binaryPath = target
|
||||
}
|
||||
}
|
||||
|
||||
// Clean up archive
|
||||
os.Remove(archivePath)
|
||||
|
||||
if binaryPath == "" {
|
||||
return "", fmt.Errorf("binary not found in archive")
|
||||
}
|
||||
|
||||
return binaryPath, nil
|
||||
}
|
||||
|
||||
// Cleanup removes all staged files.
|
||||
func (d *Downloader) Cleanup() error {
|
||||
return os.RemoveAll(d.stagingDir)
|
||||
}
|
||||
|
||||
// GetStagingDir returns the staging directory path.
|
||||
func (d *Downloader) GetStagingDir() string {
|
||||
return d.stagingDir
|
||||
}
|
||||
|
|
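Note (not part of the deleted file): a rough usage sketch of the downloader as it was wired before removal. The release values are placeholders, and the function is assumed to sit inside the same updater package so it can use the unexported types directly:

```go
package updater

import (
	"context"
	"log"
	"time"
)

// exampleDownload is a hypothetical caller; URLs, version, and the 10-minute
// timeout are placeholders, not values taken from the real application.
func exampleDownload() {
	d, err := NewDownloader()
	if err != nil {
		log.Fatal(err)
	}
	d.SetProgressCallback(func(p DownloadProgress) {
		log.Printf("update download: %.1f%% (%d/%d bytes)", p.Percent, p.BytesDownloaded, p.TotalBytes)
	})

	release := &ReleaseInfo{
		Version:     "1.2.3",
		ArchiveURL:  "https://example.com/bugseti-linux-amd64.tar.gz",
		ChecksumURL: "https://example.com/bugseti-linux-amd64.tar.gz.sha256",
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// Download stages the archive, verifies the SHA-256 if a checksum was
	// published, extracts the binary, and marks it executable on Unix.
	res, err := d.Download(ctx, release)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("staged %s (sha256 %s, verified=%v)", res.BinaryPath, res.Checksum, res.VerifiedOK)
}
```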
@ -1,30 +0,0 @@
|
|||
module forge.lthn.ai/core/cli/internal/bugseti/updater
|
||||
|
||||
go 1.25.5
|
||||
|
||||
require (
|
||||
forge.lthn.ai/core/cli/internal/bugseti v0.0.0
|
||||
golang.org/x/mod v0.32.0
|
||||
)
|
||||
|
||||
require (
|
||||
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0 // indirect
|
||||
github.com/42wim/httpsig v1.2.3 // indirect
|
||||
github.com/bahlo/generic-list-go v0.2.0 // indirect
|
||||
github.com/buger/jsonparser v1.1.1 // indirect
|
||||
github.com/davidmz/go-pageant v1.0.2 // indirect
|
||||
github.com/go-fed/httpsig v1.1.0 // indirect
|
||||
github.com/google/uuid v1.6.0 // indirect
|
||||
github.com/hashicorp/go-version v1.7.0 // indirect
|
||||
github.com/invopop/jsonschema v0.13.0 // indirect
|
||||
github.com/mailru/easyjson v0.9.1 // indirect
|
||||
github.com/mark3labs/mcp-go v0.43.2 // indirect
|
||||
github.com/spf13/cast v1.10.0 // indirect
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
|
||||
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
|
||||
golang.org/x/crypto v0.47.0 // indirect
|
||||
golang.org/x/sys v0.40.0 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
)
|
||||
|
||||
replace forge.lthn.ai/core/cli/internal/bugseti => ../
|
||||
|
|
@ -1,28 +0,0 @@
|
|||
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0 h1:HTCWpzyWQOHDWt3LzI6/d2jvUDsw/vgGRWm/8BTvcqI=
|
||||
github.com/42wim/httpsig v1.2.3 h1:xb0YyWhkYj57SPtfSttIobJUPJZB9as1nsfo7KWVcEs=
|
||||
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
|
||||
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
github.com/davidmz/go-pageant v1.0.2 h1:bPblRCh5jGU+Uptpz6LgMZGD5hJoOt7otgT454WvHn0=
|
||||
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
|
||||
github.com/go-fed/httpsig v1.1.0 h1:9M+hb0jkEICD8/cAiNqEB66R87tTINszBRTjwjQzWcI=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||
github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
|
||||
github.com/invopop/jsonschema v0.13.0 h1:KvpoAJWEjR3uD9Kbm2HWJmqsEaHt8lBUpd0qHcIi21E=
|
||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||
github.com/mailru/easyjson v0.9.1 h1:LbtsOm5WAswyWbvTEOqhypdPeZzHavpZx96/n553mR8=
|
||||
github.com/mark3labs/mcp-go v0.43.2 h1:21PUSlWWiSbUPQwXIJ5WKlETixpFpq+WBpbMGDSVy/I=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
|
||||
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
|
||||
github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zIM+UJPGz4=
|
||||
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
|
||||
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
|
||||
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
|
||||
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
|
|
@ -1,284 +0,0 @@
|
|||
// Package updater provides auto-update functionality for BugSETI.
|
||||
package updater
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"syscall"
|
||||
)
|
||||
|
||||
// InstallResult contains the result of an installation.
|
||||
type InstallResult struct {
|
||||
Success bool `json:"success"`
|
||||
OldPath string `json:"oldPath"`
|
||||
NewPath string `json:"newPath"`
|
||||
BackupPath string `json:"backupPath"`
|
||||
RestartNeeded bool `json:"restartNeeded"`
|
||||
Error string `json:"error,omitempty"`
|
||||
}
|
||||
|
||||
// Installer handles installing updates and restarting the application.
|
||||
type Installer struct {
|
||||
executablePath string
|
||||
}
|
||||
|
||||
// NewInstaller creates a new installer.
|
||||
func NewInstaller() (*Installer, error) {
|
||||
execPath, err := os.Executable()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get executable path: %w", err)
|
||||
}
|
||||
|
||||
// Resolve symlinks to get the real path
|
||||
execPath, err = filepath.EvalSymlinks(execPath)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to resolve executable path: %w", err)
|
||||
}
|
||||
|
||||
return &Installer{
|
||||
executablePath: execPath,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Install replaces the current binary with the new one.
|
||||
func (i *Installer) Install(newBinaryPath string) (*InstallResult, error) {
|
||||
result := &InstallResult{
|
||||
OldPath: i.executablePath,
|
||||
NewPath: newBinaryPath,
|
||||
RestartNeeded: true,
|
||||
}
|
||||
|
||||
// Verify the new binary exists and is executable
|
||||
if _, err := os.Stat(newBinaryPath); err != nil {
|
||||
result.Error = fmt.Sprintf("new binary not found: %v", err)
|
||||
return result, fmt.Errorf("new binary not found: %w", err)
|
||||
}
|
||||
|
||||
// Create backup of current binary
|
||||
backupPath := i.executablePath + ".bak"
|
||||
result.BackupPath = backupPath
|
||||
|
||||
// Platform-specific installation
|
||||
var err error
|
||||
switch runtime.GOOS {
|
||||
case "windows":
|
||||
err = i.installWindows(newBinaryPath, backupPath)
|
||||
default:
|
||||
err = i.installUnix(newBinaryPath, backupPath)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
result.Error = err.Error()
|
||||
return result, err
|
||||
}
|
||||
|
||||
result.Success = true
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// installUnix performs the installation on Unix-like systems.
|
||||
func (i *Installer) installUnix(newBinaryPath, backupPath string) error {
|
||||
// Remove old backup if exists
|
||||
os.Remove(backupPath)
|
||||
|
||||
// Rename current binary to backup
|
||||
if err := os.Rename(i.executablePath, backupPath); err != nil {
|
||||
return fmt.Errorf("failed to backup current binary: %w", err)
|
||||
}
|
||||
|
||||
// Copy new binary to target location
|
||||
// We use copy instead of rename in case they're on different filesystems
|
||||
if err := copyFile(newBinaryPath, i.executablePath); err != nil {
|
||||
// Try to restore backup
|
||||
os.Rename(backupPath, i.executablePath)
|
||||
return fmt.Errorf("failed to install new binary: %w", err)
|
||||
}
|
||||
|
||||
// Make executable
|
||||
if err := os.Chmod(i.executablePath, 0755); err != nil {
|
||||
// Try to restore backup
|
||||
os.Remove(i.executablePath)
|
||||
os.Rename(backupPath, i.executablePath)
|
||||
return fmt.Errorf("failed to make binary executable: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// installWindows performs the installation on Windows.
|
||||
// On Windows, we can't replace a running executable, so we use a different approach:
|
||||
// 1. Rename current executable to .old
|
||||
// 2. Copy new executable to target location
|
||||
// 3. On next start, clean up the .old file
|
||||
func (i *Installer) installWindows(newBinaryPath, backupPath string) error {
|
||||
// Remove old backup if exists
|
||||
os.Remove(backupPath)
|
||||
|
||||
// On Windows, we can rename the running executable
|
||||
if err := os.Rename(i.executablePath, backupPath); err != nil {
|
||||
return fmt.Errorf("failed to backup current binary: %w", err)
|
||||
}
|
||||
|
||||
// Copy new binary to target location
|
||||
if err := copyFile(newBinaryPath, i.executablePath); err != nil {
|
||||
// Try to restore backup
|
||||
os.Rename(backupPath, i.executablePath)
|
||||
return fmt.Errorf("failed to install new binary: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Restart restarts the application with the new binary.
|
||||
func (i *Installer) Restart() error {
|
||||
args := os.Args
|
||||
env := os.Environ()
|
||||
|
||||
switch runtime.GOOS {
|
||||
case "windows":
|
||||
return i.restartWindows(args, env)
|
||||
default:
|
||||
return i.restartUnix(args, env)
|
||||
}
|
||||
}
|
||||
|
||||
// restartUnix restarts the application on Unix-like systems using exec.
|
||||
func (i *Installer) restartUnix(args []string, env []string) error {
|
||||
// Use syscall.Exec to replace the current process
|
||||
// This is the cleanest way to restart on Unix
|
||||
return syscall.Exec(i.executablePath, args, env)
|
||||
}
|
||||
|
||||
// restartWindows restarts the application on Windows.
|
||||
func (i *Installer) restartWindows(args []string, env []string) error {
|
||||
// On Windows, we can't use exec to replace the process
|
||||
// Instead, we start a new process and exit the current one
|
||||
cmd := exec.Command(i.executablePath, args[1:]...)
|
||||
cmd.Env = env
|
||||
cmd.Stdout = os.Stdout
|
||||
cmd.Stderr = os.Stderr
|
||||
cmd.Stdin = os.Stdin
|
||||
|
||||
if err := cmd.Start(); err != nil {
|
||||
return fmt.Errorf("failed to start new process: %w", err)
|
||||
}
|
||||
|
||||
// Exit current process
|
||||
os.Exit(0)
|
||||
return nil // Never reached
|
||||
}
|
||||
|
||||
// RestartLater schedules a restart for when the app next starts.
|
||||
// This is useful when the user wants to continue working and restart later.
|
||||
func (i *Installer) RestartLater() error {
|
||||
// Create a marker file that indicates a restart is pending
|
||||
markerPath := filepath.Join(filepath.Dir(i.executablePath), ".bugseti-restart-pending")
|
||||
return os.WriteFile(markerPath, []byte("restart"), 0644)
|
||||
}
|
||||
|
||||
// CheckPendingRestart checks if a restart was scheduled.
|
||||
func (i *Installer) CheckPendingRestart() bool {
|
||||
markerPath := filepath.Join(filepath.Dir(i.executablePath), ".bugseti-restart-pending")
|
||||
_, err := os.Stat(markerPath)
|
||||
return err == nil
|
||||
}
|
||||
|
||||
// ClearPendingRestart clears the pending restart marker.
|
||||
func (i *Installer) ClearPendingRestart() error {
|
||||
markerPath := filepath.Join(filepath.Dir(i.executablePath), ".bugseti-restart-pending")
|
||||
return os.Remove(markerPath)
|
||||
}
|
||||
|
||||
// CleanupBackup removes the backup binary after a successful update.
|
||||
func (i *Installer) CleanupBackup() error {
|
||||
backupPath := i.executablePath + ".bak"
|
||||
if _, err := os.Stat(backupPath); err == nil {
|
||||
return os.Remove(backupPath)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Rollback restores the previous version from backup.
|
||||
func (i *Installer) Rollback() error {
|
||||
backupPath := i.executablePath + ".bak"
|
||||
|
||||
// Check if backup exists
|
||||
if _, err := os.Stat(backupPath); err != nil {
|
||||
return fmt.Errorf("backup not found: %w", err)
|
||||
}
|
||||
|
||||
// Remove current binary
|
||||
if err := os.Remove(i.executablePath); err != nil {
|
||||
return fmt.Errorf("failed to remove current binary: %w", err)
|
||||
}
|
||||
|
||||
// Restore backup
|
||||
if err := os.Rename(backupPath, i.executablePath); err != nil {
|
||||
return fmt.Errorf("failed to restore backup: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetExecutablePath returns the path to the current executable.
|
||||
func (i *Installer) GetExecutablePath() string {
|
||||
return i.executablePath
|
||||
}
|
||||
|
||||
// copyFile copies a file from src to dst.
|
||||
func copyFile(src, dst string) error {
|
||||
sourceFile, err := os.Open(src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer sourceFile.Close()
|
||||
|
||||
// Get source file info for permissions
|
||||
sourceInfo, err := sourceFile.Stat()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
destFile, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, sourceInfo.Mode())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer destFile.Close()
|
||||
|
||||
_, err = destFile.ReadFrom(sourceFile)
|
||||
return err
|
||||
}
|
||||
|
||||
// CanSelfUpdate checks if the application has permission to update itself.
|
||||
func CanSelfUpdate() bool {
|
||||
execPath, err := os.Executable()
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
execPath, err = filepath.EvalSymlinks(execPath)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
// Check if we can write to the executable's directory
|
||||
dir := filepath.Dir(execPath)
|
||||
testFile := filepath.Join(dir, ".bugseti-update-test")
|
||||
|
||||
f, err := os.Create(testFile)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
f.Close()
|
||||
os.Remove(testFile)
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// NeedsElevation returns true if the update requires elevated privileges.
|
||||
func NeedsElevation() bool {
|
||||
return !CanSelfUpdate()
|
||||
}
|
||||
|
|
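Note (not part of the deleted file): a sketch of how the installer was meant to be driven — gate on the write-permission check, install the staged binary, then restart. The staged path is assumed to come from a prior Downloader.Download call:

```go
package updater

import "log"

// exampleInstall is a hypothetical caller inside the updater package.
func exampleInstall(stagedBinary string) {
	if !CanSelfUpdate() {
		log.Println("update needs elevated privileges; skipping self-update")
		return
	}

	inst, err := NewInstaller()
	if err != nil {
		log.Fatal(err)
	}

	result, err := inst.Install(stagedBinary)
	if err != nil {
		// Install already tries to restore the .bak backup on failure;
		// Rollback() is the manual fallback if the binary is left broken.
		log.Printf("install failed: %v", err)
		return
	}

	if result.Success && result.RestartNeeded {
		// Replaces the process via exec on Unix, respawns and exits on Windows.
		_ = inst.Restart()
	}
}
```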
@ -1,322 +0,0 @@
|
|||
// Package updater provides auto-update functionality for BugSETI.
|
||||
package updater
|
||||
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"forge.lthn.ai/core/cli/internal/bugseti"
|
||||
)
|
||||
|
||||
// Service provides update functionality and Wails bindings.
|
||||
type Service struct {
|
||||
config *bugseti.ConfigService
|
||||
checker *Checker
|
||||
downloader *Downloader
|
||||
installer *Installer
|
||||
|
||||
mu sync.RWMutex
|
||||
lastResult *UpdateCheckResult
|
||||
pendingUpdate *DownloadResult
|
||||
|
||||
// Background check
|
||||
stopCh chan struct{}
|
||||
running bool
|
||||
}
|
||||
|
||||
// NewService creates a new update service.
|
||||
func NewService(config *bugseti.ConfigService) (*Service, error) {
|
||||
downloader, err := NewDownloader()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
installer, err := NewInstaller()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &Service{
|
||||
config: config,
|
||||
checker: NewChecker(),
|
||||
downloader: downloader,
|
||||
installer: installer,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (s *Service) ServiceName() string {
|
||||
return "UpdateService"
|
||||
}
|
||||
|
||||
// Start begins the background update checker.
|
||||
func (s *Service) Start() {
|
||||
s.mu.Lock()
|
||||
if s.running {
|
||||
s.mu.Unlock()
|
||||
return
|
||||
}
|
||||
s.running = true
|
||||
s.stopCh = make(chan struct{})
|
||||
s.mu.Unlock()
|
||||
|
||||
go s.runBackgroundChecker()
|
||||
}
|
||||
|
||||
// Stop stops the background update checker.
|
||||
func (s *Service) Stop() {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
if !s.running {
|
||||
return
|
||||
}
|
||||
|
||||
s.running = false
|
||||
close(s.stopCh)
|
||||
}
|
||||
|
||||
// runBackgroundChecker runs periodic update checks.
|
||||
func (s *Service) runBackgroundChecker() {
|
||||
// Initial check after a short delay
|
||||
time.Sleep(30 * time.Second)
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-s.stopCh:
|
||||
return
|
||||
default:
|
||||
}
|
||||
|
||||
if s.config.ShouldCheckForUpdates() {
|
||||
log.Println("Checking for updates...")
|
||||
_, err := s.CheckForUpdate()
|
||||
if err != nil {
|
||||
log.Printf("Update check failed: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Check interval from config (minimum 1 hour)
|
||||
interval := time.Duration(s.config.GetUpdateCheckInterval()) * time.Hour
|
||||
if interval < time.Hour {
|
||||
interval = time.Hour
|
||||
}
|
||||
|
||||
select {
|
||||
case <-s.stopCh:
|
||||
return
|
||||
case <-time.After(interval):
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// GetSettings returns the update settings.
|
||||
func (s *Service) GetSettings() bugseti.UpdateSettings {
|
||||
return s.config.GetUpdateSettings()
|
||||
}
|
||||
|
||||
// SetSettings updates the update settings.
|
||||
func (s *Service) SetSettings(settings bugseti.UpdateSettings) error {
|
||||
return s.config.SetUpdateSettings(settings)
|
||||
}
|
||||
|
||||
// GetVersionInfo returns the current version information.
|
||||
func (s *Service) GetVersionInfo() bugseti.VersionInfo {
|
||||
return bugseti.GetVersionInfo()
|
||||
}
|
||||
|
||||
// GetChannels returns all available update channels.
|
||||
func (s *Service) GetChannels() []ChannelInfo {
|
||||
return GetAllChannelInfo()
|
||||
}
|
||||
|
||||
// CheckForUpdate checks if an update is available.
|
||||
func (s *Service) CheckForUpdate() (*UpdateCheckResult, error) {
|
||||
currentVersion := bugseti.GetVersion()
|
||||
channelStr := s.config.GetUpdateChannel()
|
||||
|
||||
channel, err := ParseChannel(channelStr)
|
||||
if err != nil {
|
||||
channel = ChannelStable
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
result, err := s.checker.CheckForUpdate(ctx, currentVersion, channel)
|
||||
if err != nil {
|
||||
return result, err
|
||||
}
|
||||
|
||||
// Update last check time
|
||||
s.config.SetLastUpdateCheck(time.Now())
|
||||
|
||||
// Store result
|
||||
s.mu.Lock()
|
||||
s.lastResult = result
|
||||
s.mu.Unlock()
|
||||
|
||||
// If auto-update is enabled and an update is available, download it
|
||||
if result.Available && s.config.IsAutoUpdateEnabled() {
|
||||
go s.downloadUpdate(result.Release)
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// GetLastCheckResult returns the last update check result.
|
||||
func (s *Service) GetLastCheckResult() *UpdateCheckResult {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
return s.lastResult
|
||||
}
|
||||
|
||||
// downloadUpdate downloads an update in the background.
|
||||
func (s *Service) downloadUpdate(release *ReleaseInfo) {
|
||||
if release == nil {
|
||||
return
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
log.Printf("Downloading update %s...", release.Version)
|
||||
|
||||
result, err := s.downloader.Download(ctx, release)
|
||||
if err != nil {
|
||||
log.Printf("Failed to download update: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
log.Printf("Update %s downloaded and staged at %s", release.Version, result.BinaryPath)
|
||||
|
||||
s.mu.Lock()
|
||||
s.pendingUpdate = result
|
||||
s.mu.Unlock()
|
||||
}
|
||||
|
||||
// DownloadUpdate downloads the latest available update.
|
||||
func (s *Service) DownloadUpdate() (*DownloadResult, error) {
|
||||
s.mu.RLock()
|
||||
lastResult := s.lastResult
|
||||
s.mu.RUnlock()
|
||||
|
||||
if lastResult == nil || !lastResult.Available || lastResult.Release == nil {
|
||||
// Need to check first
|
||||
result, err := s.CheckForUpdate()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !result.Available {
|
||||
return nil, nil
|
||||
}
|
||||
lastResult = result
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
downloadResult, err := s.downloader.Download(ctx, lastResult.Release)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
s.mu.Lock()
|
||||
s.pendingUpdate = downloadResult
|
||||
s.mu.Unlock()
|
||||
|
||||
return downloadResult, nil
|
||||
}
|
||||
|
||||
// InstallUpdate installs a previously downloaded update.
|
||||
func (s *Service) InstallUpdate() (*InstallResult, error) {
|
||||
s.mu.RLock()
|
||||
pending := s.pendingUpdate
|
||||
s.mu.RUnlock()
|
||||
|
||||
if pending == nil {
|
||||
// Try to download first
|
||||
downloadResult, err := s.DownloadUpdate()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if downloadResult == nil {
|
||||
return &InstallResult{
|
||||
Success: false,
|
||||
Error: "No update available",
|
||||
}, nil
|
||||
}
|
||||
pending = downloadResult
|
||||
}
|
||||
|
||||
result, err := s.installer.Install(pending.BinaryPath)
|
||||
if err != nil {
|
||||
return result, err
|
||||
}
|
||||
|
||||
// Clear pending update
|
||||
s.mu.Lock()
|
||||
s.pendingUpdate = nil
|
||||
s.mu.Unlock()
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// InstallAndRestart installs the update and restarts the application.
|
||||
func (s *Service) InstallAndRestart() error {
|
||||
result, err := s.InstallUpdate()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !result.Success {
|
||||
return nil
|
||||
}
|
||||
|
||||
return s.installer.Restart()
|
||||
}
|
||||
|
||||
// HasPendingUpdate returns true if there's a downloaded update ready to install.
|
||||
func (s *Service) HasPendingUpdate() bool {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
return s.pendingUpdate != nil
|
||||
}
|
||||
|
||||
// GetPendingUpdate returns information about the pending update.
|
||||
func (s *Service) GetPendingUpdate() *DownloadResult {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
return s.pendingUpdate
|
||||
}
|
||||
|
||||
// CancelPendingUpdate cancels and removes the pending update.
|
||||
func (s *Service) CancelPendingUpdate() error {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
s.pendingUpdate = nil
|
||||
return s.downloader.Cleanup()
|
||||
}
|
||||
|
||||
// CanSelfUpdate returns true if the application can update itself.
|
||||
func (s *Service) CanSelfUpdate() bool {
|
||||
return CanSelfUpdate()
|
||||
}
|
||||
|
||||
// NeedsElevation returns true if the update requires elevated privileges.
|
||||
func (s *Service) NeedsElevation() bool {
|
||||
return NeedsElevation()
|
||||
}
|
||||
|
||||
// Rollback restores the previous version.
|
||||
func (s *Service) Rollback() error {
|
||||
return s.installer.Rollback()
|
||||
}
|
||||
|
||||
// CleanupAfterUpdate cleans up backup files after a successful update.
|
||||
func (s *Service) CleanupAfterUpdate() error {
|
||||
return s.installer.CleanupBackup()
|
||||
}
|
||||
|
|
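Note (not part of the deleted file): the service above was the Wails-facing surface; a sketch of the manual check → download → install path it exposed, assuming an already-initialised *bugseti.ConfigService:

```go
package updater

import (
	"log"

	"forge.lthn.ai/core/cli/internal/bugseti"
)

// exampleServiceFlow is a hypothetical caller; error handling is abbreviated.
func exampleServiceFlow(cfg *bugseti.ConfigService) {
	svc, err := NewService(cfg)
	if err != nil {
		log.Fatal(err)
	}

	svc.Start() // periodic background checks per the configured interval
	defer svc.Stop()

	check, err := svc.CheckForUpdate()
	if err != nil || !check.Available {
		return
	}

	if _, err := svc.DownloadUpdate(); err != nil {
		log.Printf("download failed: %v", err)
		return
	}

	// Installs the staged binary and restarts into the new version.
	if err := svc.InstallAndRestart(); err != nil {
		log.Printf("install failed: %v", err)
	}
}
```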
@ -1,122 +0,0 @@
|
|||
// Package bugseti provides version information for the BugSETI application.
|
||||
package bugseti
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"runtime"
|
||||
)
|
||||
|
||||
// Version information - these are set at build time via ldflags
|
||||
// Example: go build -ldflags "-X forge.lthn.ai/core/cli/internal/bugseti.Version=1.0.0"
|
||||
var (
|
||||
// Version is the semantic version (e.g., "1.0.0", "1.0.0-beta.1", "nightly-20260205")
|
||||
Version = "dev"
|
||||
|
||||
// Channel is the release channel (stable, beta, nightly)
|
||||
Channel = "dev"
|
||||
|
||||
// Commit is the git commit SHA
|
||||
Commit = "unknown"
|
||||
|
||||
// BuildTime is the UTC build timestamp
|
||||
BuildTime = "unknown"
|
||||
)
|
||||
|
||||
// VersionInfo contains all version-related information.
|
||||
type VersionInfo struct {
|
||||
Version string `json:"version"`
|
||||
Channel string `json:"channel"`
|
||||
Commit string `json:"commit"`
|
||||
BuildTime string `json:"buildTime"`
|
||||
GoVersion string `json:"goVersion"`
|
||||
OS string `json:"os"`
|
||||
Arch string `json:"arch"`
|
||||
}
|
||||
|
||||
// GetVersion returns the current version string.
|
||||
func GetVersion() string {
|
||||
return Version
|
||||
}
|
||||
|
||||
// GetChannel returns the release channel.
|
||||
func GetChannel() string {
|
||||
return Channel
|
||||
}
|
||||
|
||||
// GetVersionInfo returns complete version information.
|
||||
func GetVersionInfo() VersionInfo {
|
||||
return VersionInfo{
|
||||
Version: Version,
|
||||
Channel: Channel,
|
||||
Commit: Commit,
|
||||
BuildTime: BuildTime,
|
||||
GoVersion: runtime.Version(),
|
||||
OS: runtime.GOOS,
|
||||
Arch: runtime.GOARCH,
|
||||
}
|
||||
}
|
||||
|
||||
// GetVersionString returns a formatted version string for display.
|
||||
func GetVersionString() string {
|
||||
if Channel == "dev" {
|
||||
return fmt.Sprintf("BugSETI %s (development build)", Version)
|
||||
}
|
||||
if Channel == "nightly" {
|
||||
return fmt.Sprintf("BugSETI %s (nightly)", Version)
|
||||
}
|
||||
if Channel == "beta" {
|
||||
return fmt.Sprintf("BugSETI v%s (beta)", Version)
|
||||
}
|
||||
return fmt.Sprintf("BugSETI v%s", Version)
|
||||
}
|
||||
|
||||
// GetShortCommit returns the first 7 characters of the commit hash.
|
||||
func GetShortCommit() string {
|
||||
if len(Commit) >= 7 {
|
||||
return Commit[:7]
|
||||
}
|
||||
return Commit
|
||||
}
|
||||
|
||||
// IsDevelopment returns true if this is a development build.
|
||||
func IsDevelopment() bool {
|
||||
return Channel == "dev" || Version == "dev"
|
||||
}
|
||||
|
||||
// IsPrerelease returns true if this is a prerelease build (beta or nightly).
|
||||
func IsPrerelease() bool {
|
||||
return Channel == "beta" || Channel == "nightly"
|
||||
}
|
||||
|
||||
// VersionService provides version information to the frontend via Wails.
|
||||
type VersionService struct{}
|
||||
|
||||
// NewVersionService creates a new VersionService.
|
||||
func NewVersionService() *VersionService {
|
||||
return &VersionService{}
|
||||
}
|
||||
|
||||
// ServiceName returns the service name for Wails.
|
||||
func (v *VersionService) ServiceName() string {
|
||||
return "VersionService"
|
||||
}
|
||||
|
||||
// GetVersion returns the version string.
|
||||
func (v *VersionService) GetVersion() string {
|
||||
return GetVersion()
|
||||
}
|
||||
|
||||
// GetChannel returns the release channel.
|
||||
func (v *VersionService) GetChannel() string {
|
||||
return GetChannel()
|
||||
}
|
||||
|
||||
// GetVersionInfo returns complete version information.
|
||||
func (v *VersionService) GetVersionInfo() VersionInfo {
|
||||
return GetVersionInfo()
|
||||
}
|
||||
|
||||
// GetVersionString returns a formatted version string.
|
||||
func (v *VersionService) GetVersionString() string {
|
||||
return GetVersionString()
|
||||
}
|
||||
|
|
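Note (not part of the deleted file): the display strings GetVersionString produced for values that ldflags would normally inject at build time. The assignments below are illustrative only; in a real build the variables are set via -X flags, not mutated in code:

```go
package bugseti

import "fmt"

// exampleVersionOutput shows the per-channel formatting rules.
func exampleVersionOutput() {
	Version, Channel = "1.4.0", "stable"
	fmt.Println(GetVersionString()) // BugSETI v1.4.0

	Version, Channel = "1.5.0-beta.2", "beta"
	fmt.Println(GetVersionString()) // BugSETI v1.5.0-beta.2 (beta)

	Version, Channel = "nightly-20260205", "nightly"
	fmt.Println(GetVersionString()) // BugSETI nightly-20260205 (nightly)
}
```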
@ -1,525 +0,0 @@
|
|||
// Command i18n-validate scans Go source files for i18n key usage and validates
|
||||
// them against the locale JSON files.
|
||||
//
|
||||
// Usage:
|
||||
//
|
||||
// go run ./cmd/i18n-validate ./...
|
||||
// go run ./cmd/i18n-validate ./pkg/cli ./cmd/dev
|
||||
//
|
||||
// The validator checks:
|
||||
// - T("key") calls - validates key exists in locale files
|
||||
// - C("intent", ...) calls - validates intent exists in registered intents
|
||||
// - i18n.T("key") and i18n.C("intent", ...) qualified calls
|
||||
//
|
||||
// Exit codes:
|
||||
// - 0: All keys valid
|
||||
// - 1: Missing keys found
|
||||
// - 2: Error during validation
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"go/ast"
|
||||
"go/parser"
|
||||
"go/token"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// KeyUsage records where a key is used in the source code.
|
||||
type KeyUsage struct {
|
||||
Key string
|
||||
File string
|
||||
Line int
|
||||
Function string // "T" or "C"
|
||||
}
|
||||
|
||||
// ValidationResult holds the results of validation.
|
||||
type ValidationResult struct {
|
||||
TotalKeys int
|
||||
ValidKeys int
|
||||
MissingKeys []KeyUsage
|
||||
IntentKeys int
|
||||
MessageKeys int
|
||||
}
|
||||
|
||||
func main() {
|
||||
if len(os.Args) < 2 {
|
||||
fmt.Fprintln(os.Stderr, "Usage: i18n-validate <packages...>")
|
||||
fmt.Fprintln(os.Stderr, "Example: i18n-validate ./...")
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
// Find the project root (where locales are)
|
||||
root, err := findProjectRoot()
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error finding project root: %v\n", err)
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
// Load valid keys from locale files
|
||||
validKeys, err := loadValidKeys(filepath.Join(root, "pkg/i18n/locales"))
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error loading locale files: %v\n", err)
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
// Load valid intents
|
||||
validIntents := loadValidIntents()
|
||||
|
||||
// Scan source files
|
||||
usages, err := scanPackages(os.Args[1:])
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error scanning packages: %v\n", err)
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
// Validate
|
||||
result := validate(usages, validKeys, validIntents)
|
||||
|
||||
// Report
|
||||
printReport(result)
|
||||
|
||||
if len(result.MissingKeys) > 0 {
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
// findProjectRoot finds the project root by looking for go.mod.
|
||||
func findProjectRoot() (string, error) {
|
||||
dir, err := os.Getwd()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
for {
|
||||
if _, err := os.Stat(filepath.Join(dir, "go.mod")); err == nil {
|
||||
return dir, nil
|
||||
}
|
||||
parent := filepath.Dir(dir)
|
||||
if parent == dir {
|
||||
return "", fmt.Errorf("could not find go.mod in any parent directory")
|
||||
}
|
||||
dir = parent
|
||||
}
|
||||
}
|
||||
|
||||
// loadValidKeys loads all valid keys from locale JSON files.
|
||||
func loadValidKeys(localesDir string) (map[string]bool, error) {
|
||||
keys := make(map[string]bool)
|
||||
|
||||
entries, err := os.ReadDir(localesDir)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("reading locales dir: %w", err)
|
||||
}
|
||||
|
||||
for _, entry := range entries {
|
||||
if entry.IsDir() || !strings.HasSuffix(entry.Name(), ".json") {
|
||||
continue
|
||||
}
|
||||
|
||||
data, err := os.ReadFile(filepath.Join(localesDir, entry.Name()))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("reading %s: %w", entry.Name(), err)
|
||||
}
|
||||
|
||||
var raw map[string]any
|
||||
if err := json.Unmarshal(data, &raw); err != nil {
|
||||
return nil, fmt.Errorf("parsing %s: %w", entry.Name(), err)
|
||||
}
|
||||
|
||||
extractKeys("", raw, keys)
|
||||
}
|
||||
|
||||
return keys, nil
|
||||
}
|
||||
|
||||
// extractKeys recursively extracts flattened keys from nested JSON.
|
||||
func extractKeys(prefix string, data map[string]any, out map[string]bool) {
|
||||
for key, value := range data {
|
||||
fullKey := key
|
||||
if prefix != "" {
|
||||
fullKey = prefix + "." + key
|
||||
}
|
||||
|
||||
switch v := value.(type) {
|
||||
case string:
|
||||
out[fullKey] = true
|
||||
case map[string]any:
|
||||
// Check if it's a plural/verb/noun object (has specific keys)
|
||||
if isPluralOrGrammarObject(v) {
|
||||
out[fullKey] = true
|
||||
} else {
|
||||
extractKeys(fullKey, v, out)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// isPluralOrGrammarObject checks if a map is a leaf object (plural forms, verb forms, etc).
|
||||
func isPluralOrGrammarObject(m map[string]any) bool {
|
||||
// CLDR plural keys
|
||||
_, hasOne := m["one"]
|
||||
_, hasOther := m["other"]
|
||||
_, hasZero := m["zero"]
|
||||
_, hasTwo := m["two"]
|
||||
_, hasFew := m["few"]
|
||||
_, hasMany := m["many"]
|
||||
|
||||
// Grammar keys
|
||||
_, hasPast := m["past"]
|
||||
_, hasGerund := m["gerund"]
|
||||
_, hasGender := m["gender"]
|
||||
_, hasBase := m["base"]
|
||||
|
||||
// Article keys
|
||||
_, hasDefault := m["default"]
|
||||
_, hasVowel := m["vowel"]
|
||||
|
||||
if hasOne || hasOther || hasZero || hasTwo || hasFew || hasMany {
|
||||
return true
|
||||
}
|
||||
if hasPast || hasGerund || hasGender || hasBase {
|
||||
return true
|
||||
}
|
||||
if hasDefault || hasVowel {
|
||||
return true
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// loadValidIntents returns the set of valid intent keys.
|
||||
func loadValidIntents() map[string]bool {
|
||||
// Core intents - these match what's defined in intents.go
|
||||
return map[string]bool{
|
||||
// Destructive
|
||||
"core.delete": true,
|
||||
"core.remove": true,
|
||||
"core.discard": true,
|
||||
"core.reset": true,
|
||||
"core.overwrite": true,
|
||||
// Creation
|
||||
"core.create": true,
|
||||
"core.add": true,
|
||||
"core.clone": true,
|
||||
"core.copy": true,
|
||||
// Modification
|
||||
"core.save": true,
|
||||
"core.update": true,
|
||||
"core.rename": true,
|
||||
"core.move": true,
|
||||
// Git
|
||||
"core.commit": true,
|
||||
"core.push": true,
|
||||
"core.pull": true,
|
||||
"core.merge": true,
|
||||
"core.rebase": true,
|
||||
// Network
|
||||
"core.install": true,
|
||||
"core.download": true,
|
||||
"core.upload": true,
|
||||
"core.publish": true,
|
||||
"core.deploy": true,
|
||||
// Process
|
||||
"core.start": true,
|
||||
"core.stop": true,
|
||||
"core.restart": true,
|
||||
"core.run": true,
|
||||
"core.build": true,
|
||||
"core.test": true,
|
||||
// Information
|
||||
"core.continue": true,
|
||||
"core.proceed": true,
|
||||
"core.confirm": true,
|
||||
// Additional
|
||||
"core.sync": true,
|
||||
"core.boot": true,
|
||||
"core.format": true,
|
||||
"core.analyse": true,
|
||||
"core.link": true,
|
||||
"core.unlink": true,
|
||||
"core.fetch": true,
|
||||
"core.generate": true,
|
||||
"core.validate": true,
|
||||
"core.check": true,
|
||||
"core.scan": true,
|
||||
}
|
||||
}
|
||||
|
||||
// scanPackages scans Go packages for i18n key usage.
|
||||
func scanPackages(patterns []string) ([]KeyUsage, error) {
|
||||
var usages []KeyUsage
|
||||
|
||||
for _, pattern := range patterns {
|
||||
// Expand pattern
|
||||
matches, err := expandPattern(pattern)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("expanding pattern %q: %w", pattern, err)
|
||||
}
|
||||
|
||||
for _, dir := range matches {
|
||||
dirUsages, err := scanDirectory(dir)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("scanning %s: %w", dir, err)
|
||||
}
|
||||
usages = append(usages, dirUsages...)
|
||||
}
|
||||
}
|
||||
|
||||
return usages, nil
|
||||
}
|
||||
|
||||
// expandPattern expands a Go package pattern to directories.
|
||||
func expandPattern(pattern string) ([]string, error) {
|
||||
// Handle ./... or ... pattern
|
||||
if strings.HasSuffix(pattern, "...") {
|
||||
base := strings.TrimSuffix(pattern, "...")
|
||||
base = strings.TrimSuffix(base, "/")
|
||||
if base == "" || base == "." {
|
||||
base = "."
|
||||
}
|
||||
return findAllGoDirs(base)
|
||||
}
|
||||
|
||||
// Single directory
|
||||
return []string{pattern}, nil
|
||||
}
|
||||
|
||||
// findAllGoDirs finds all directories containing .go files.
|
||||
func findAllGoDirs(root string) ([]string, error) {
|
||||
var dirs []string
|
||||
seen := make(map[string]bool)
|
||||
|
||||
err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return nil // Continue walking even on error
|
||||
}
|
||||
|
||||
if info == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Skip vendor, testdata, and hidden directories (but not . itself)
|
||||
if info.IsDir() {
|
||||
name := info.Name()
|
||||
if name == "vendor" || name == "testdata" || (strings.HasPrefix(name, ".") && name != ".") {
|
||||
return filepath.SkipDir
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Check for .go files
|
||||
if strings.HasSuffix(path, ".go") {
|
||||
dir := filepath.Dir(path)
|
||||
if !seen[dir] {
|
||||
seen[dir] = true
|
||||
dirs = append(dirs, dir)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
|
||||
return dirs, err
|
||||
}
|
||||
|
||||
// scanDirectory scans a directory for i18n key usage.
|
||||
func scanDirectory(dir string) ([]KeyUsage, error) {
|
||||
var usages []KeyUsage
|
||||
|
||||
fset := token.NewFileSet()
|
||||
// Parse all .go files except those ending exactly in _test.go
|
||||
pkgs, err := parser.ParseDir(fset, dir, func(fi os.FileInfo) bool {
|
||||
name := fi.Name()
|
||||
// Only exclude files that are actual test files (ending in _test.go)
|
||||
// Files like "go_test_cmd.go" should be included
|
||||
return strings.HasSuffix(name, ".go") && !strings.HasSuffix(name, "_test.go")
|
||||
}, 0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for _, pkg := range pkgs {
|
||||
for filename, file := range pkg.Files {
|
||||
fileUsages := scanFile(fset, filename, file)
|
||||
usages = append(usages, fileUsages...)
|
||||
}
|
||||
}
|
||||
|
||||
return usages, nil
|
||||
}
|
||||
|
||||
// scanFile scans a single file for i18n key usage.
|
||||
func scanFile(fset *token.FileSet, filename string, file *ast.File) []KeyUsage {
|
||||
var usages []KeyUsage
|
||||
|
||||
ast.Inspect(file, func(n ast.Node) bool {
|
||||
call, ok := n.(*ast.CallExpr)
|
||||
if !ok {
|
||||
return true
|
||||
}
|
||||
|
||||
funcName := getFuncName(call)
|
||||
if funcName == "" {
|
||||
return true
|
||||
}
|
||||
|
||||
// Check for T(), C(), i18n.T(), i18n.C()
|
||||
switch funcName {
|
||||
case "T", "i18n.T", "_", "i18n._":
|
||||
if key := extractStringArg(call, 0); key != "" {
|
||||
pos := fset.Position(call.Pos())
|
||||
usages = append(usages, KeyUsage{
|
||||
Key: key,
|
||||
File: filename,
|
||||
Line: pos.Line,
|
||||
Function: "T",
|
||||
})
|
||||
}
|
||||
case "C", "i18n.C":
|
||||
if key := extractStringArg(call, 0); key != "" {
|
||||
pos := fset.Position(call.Pos())
|
||||
usages = append(usages, KeyUsage{
|
||||
Key: key,
|
||||
File: filename,
|
||||
Line: pos.Line,
|
||||
Function: "C",
|
||||
})
|
||||
}
|
||||
case "I", "i18n.I":
|
||||
if key := extractStringArg(call, 0); key != "" {
|
||||
pos := fset.Position(call.Pos())
|
||||
usages = append(usages, KeyUsage{
|
||||
Key: key,
|
||||
File: filename,
|
||||
Line: pos.Line,
|
||||
Function: "C", // I() is an intent builder
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
})
|
||||
|
||||
return usages
|
||||
}
|
||||
|
||||
// getFuncName extracts the function name from a call expression.
|
||||
func getFuncName(call *ast.CallExpr) string {
|
||||
switch fn := call.Fun.(type) {
|
||||
case *ast.Ident:
|
||||
return fn.Name
|
||||
case *ast.SelectorExpr:
|
||||
if ident, ok := fn.X.(*ast.Ident); ok {
|
||||
return ident.Name + "." + fn.Sel.Name
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// extractStringArg extracts a string literal from a call argument.
|
||||
func extractStringArg(call *ast.CallExpr, index int) string {
|
||||
if index >= len(call.Args) {
|
||||
return ""
|
||||
}
|
||||
|
||||
arg := call.Args[index]
|
||||
|
||||
// Direct string literal
|
||||
if lit, ok := arg.(*ast.BasicLit); ok && lit.Kind == token.STRING {
|
||||
// Remove quotes
|
||||
s := lit.Value
|
||||
if len(s) >= 2 {
|
||||
return s[1 : len(s)-1]
|
||||
}
|
||||
}
|
||||
|
||||
// Identifier (constant reference) - we skip these as they're type-safe
|
||||
if _, ok := arg.(*ast.Ident); ok {
|
||||
return "" // Skip constants like IntentCoreDelete
|
||||
}
|
||||
|
||||
// Selector (like i18n.IntentCoreDelete) - skip these too
|
||||
if _, ok := arg.(*ast.SelectorExpr); ok {
|
||||
return ""
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
// validate validates key usages against valid keys and intents.
|
||||
func validate(usages []KeyUsage, validKeys, validIntents map[string]bool) ValidationResult {
|
||||
result := ValidationResult{
|
||||
TotalKeys: len(usages),
|
||||
}
|
||||
|
||||
for _, usage := range usages {
|
||||
if usage.Function == "C" {
|
||||
result.IntentKeys++
|
||||
// Check intent keys
|
||||
if validIntents[usage.Key] {
|
||||
result.ValidKeys++
|
||||
} else {
|
||||
// Also allow custom intents (non-core.* prefix)
|
||||
if !strings.HasPrefix(usage.Key, "core.") {
|
||||
result.ValidKeys++ // Assume custom intents are valid
|
||||
} else {
|
||||
result.MissingKeys = append(result.MissingKeys, usage)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
result.MessageKeys++
|
||||
// Check message keys
|
||||
if validKeys[usage.Key] {
|
||||
result.ValidKeys++
|
||||
} else if strings.HasPrefix(usage.Key, "core.") {
|
||||
// core.* keys used with T() are intent keys
|
||||
if validIntents[usage.Key] {
|
||||
result.ValidKeys++
|
||||
} else {
|
||||
result.MissingKeys = append(result.MissingKeys, usage)
|
||||
}
|
||||
} else {
|
||||
result.MissingKeys = append(result.MissingKeys, usage)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// printReport prints the validation report.
|
||||
func printReport(result ValidationResult) {
|
||||
fmt.Printf("i18n Validation Report\n")
|
||||
fmt.Printf("======================\n\n")
|
||||
fmt.Printf("Total keys scanned: %d\n", result.TotalKeys)
|
||||
fmt.Printf(" Message keys (T): %d\n", result.MessageKeys)
|
||||
fmt.Printf(" Intent keys (C): %d\n", result.IntentKeys)
|
||||
fmt.Printf("Valid keys: %d\n", result.ValidKeys)
|
||||
fmt.Printf("Missing keys: %d\n", len(result.MissingKeys))
|
||||
|
||||
if len(result.MissingKeys) > 0 {
|
||||
fmt.Printf("\nMissing Keys:\n")
|
||||
fmt.Printf("-------------\n")
|
||||
|
||||
// Sort by file then line
|
||||
sort.Slice(result.MissingKeys, func(i, j int) bool {
|
||||
if result.MissingKeys[i].File != result.MissingKeys[j].File {
|
||||
return result.MissingKeys[i].File < result.MissingKeys[j].File
|
||||
}
|
||||
return result.MissingKeys[i].Line < result.MissingKeys[j].Line
|
||||
})
|
||||
|
||||
for _, usage := range result.MissingKeys {
|
||||
fmt.Printf(" %s:%d: %s(%q)\n", usage.File, usage.Line, usage.Function, usage.Key)
|
||||
}
|
||||
|
||||
fmt.Printf("\nAdd these keys to pkg/i18n/locales/en_GB.json or use constants from pkg/i18n/keys.go\n")
|
||||
} else {
|
||||
fmt.Printf("\nAll keys are valid!\n")
|
||||
}
|
||||
}
|
||||
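Note (not part of the deleted file): the key-flattening rule in extractKeys is the subtle part of the validator — plain strings become dotted keys, while CLDR plural objects are treated as a single leaf key rather than recursed into. A sketch with an invented locale fragment, meant to sit alongside the tool's package main:

```go
package main

import "fmt"

// exampleFlatten demonstrates the flattening behaviour on a made-up fragment.
func exampleFlatten() {
	raw := map[string]any{
		"cli": map[string]any{
			"greeting": "Hello",
			"files": map[string]any{ // plural object: kept as one leaf key
				"one":   "%d file",
				"other": "%d files",
			},
		},
	}

	keys := make(map[string]bool)
	extractKeys("", raw, keys)
	fmt.Println(keys) // map[cli.files:true cli.greeting:true]
}
```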