
Compare commits


33 Commits

Author SHA1 Message Date
63967eb6fa style(ui): add package docs and mark unused parameters
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (arm64, darwin) (push) Has been cancelled
Release / build (amd64, windows) (push) Has been cancelled
Release / build (arm64, linux) (push) Has been cancelled
Release / build (amd64, darwin) (push) Has been cancelled
Release / build (amd64, linux) (push) Has been cancelled
Release / release (push) Has been cancelled
Adds package-level documentation comments across cmd and internal packages. Marks unused function parameters with underscore prefix to satisfy linter requirements. Replaces if-else chains with switch statements for better readability. Explicitly ignores os.Setenv return value where error handling is not needed.
2026-01-19 01:16:47 -05:00
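The lint fixes this commit describes can be illustrated with a small sketch; the function and its parameters here are hypothetical, not code from the repository:

```go
package main

import "fmt"

// handleEvent ignores its first parameter; naming it "_" tells the linter
// the parameter is intentionally unused.
func handleEvent(_ string, kind int) string {
	// A switch over one value reads more clearly than an if-else chain.
	switch kind {
	case 0:
		return "push"
	case 1:
		return "pull_request"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(handleEvent("ctx", 0))
}
```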
22f1ea6e76 chore(ui): update golangci-lint config and cleanup package docs
Updates golangci-lint configuration to v2 format with Go 1.23, streamlines linter settings by removing deprecated options and unnecessary exclusions. Adds package documentation and renames CleanupResult to Result for consistency. Marks unused context parameter with underscore.
2026-01-19 01:03:07 -05:00
GitCaddy Bot
4d6900b7a3 Update README download URLs to v1.0.0
Some checks failed
Release / release (push) Has been cancelled
Release / build (amd64, darwin) (push) Has been cancelled
Release / build (amd64, windows) (push) Has been cancelled
CI / build-and-test (push) Has been cancelled
Release / build (amd64, linux) (push) Has been cancelled
Release / build (arm64, darwin) (push) Has been cancelled
Release / build (arm64, linux) (push) Has been cancelled
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 10:52:04 -05:00
GitCaddy Bot
898ef596ae Fix release workflow to use gitcaddy-runner naming
Some checks failed
CI / build-and-test (push) Successful in 1m15s
Release / build (arm64, darwin) (push) Has been cancelled
Release / build (arm64, linux) (push) Has been cancelled
Release / release (push) Has been cancelled
Release / build (amd64, windows) (push) Has been cancelled
Release / build (amd64, linux) (push) Has been cancelled
Release / build (amd64, darwin) (push) Has been cancelled
- Update ldflags to use git.marketally.com/gitcaddy/gitcaddy-runner path
- Rename output binaries from act_runner to gitcaddy-runner
- Update artifact names to match new naming convention

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 10:49:46 -05:00
GitCaddy Bot
eb37073861 Fix project name in goreleaser config
All checks were successful
CI / build-and-test (push) Successful in 53s
Release / build (amd64, linux) (push) Successful in 1m7s
Release / build (amd64, darwin) (push) Successful in 1m8s
Release / build (amd64, windows) (push) Successful in 1m17s
Release / build (arm64, darwin) (push) Successful in 48s
Release / build (arm64, linux) (push) Successful in 54s
Release / release (push) Successful in 17s
- Add project_name: gitcaddy-runner so binaries are named correctly
- Update gitea_urls to point to git.marketally.com instead of gitea.com

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 10:41:56 -05:00
GitCaddy Bot
ec9b323318 Rebrand from act_runner to gitcaddy-runner v1.0.0
All checks were successful
Release / build (amd64, linux) (push) Successful in 1m15s
CI / build-and-test (push) Successful in 1m7s
Release / build (amd64, windows) (push) Successful in 1m3s
Release / build (amd64, darwin) (push) Successful in 1m8s
Release / build (arm64, darwin) (push) Successful in 46s
Release / build (arm64, linux) (push) Successful in 50s
Release / release (push) Successful in 26s
- Update module path: gitea.com/gitea/act_runner → git.marketally.com/gitcaddy/gitcaddy-runner
- Update all import paths across Go source files
- Update Makefile LDFLAGS and package references
- Update .goreleaser.yaml ldflags and S3 directory path
- Add comprehensive README with capacity configuration documentation
- Document troubleshooting for capacity feature and Docker detection
- Bump version to v1.0.0 for major rebrand
- All linting checks passed (fmt-check, go mod tidy, go vet)
2026-01-16 10:31:58 -05:00
GitCaddy
d955727863 Fix formatting (gofmt, remove BOM)
All checks were successful
CI / build-and-test (push) Successful in 1m13s
Release / build (amd64, darwin) (push) Successful in 57s
Release / build (amd64, linux) (push) Successful in 55s
Release / build (amd64, windows) (push) Successful in 54s
Release / build (arm64, darwin) (push) Successful in 53s
Release / build (arm64, linux) (push) Successful in 52s
Release / release (push) Successful in 19s
2026-01-15 13:09:06 +00:00
GitCaddy
3addd66efa Report runner capacity in capabilities JSON
Some checks failed
CI / build-and-test (push) Failing after 20s
2026-01-15 13:06:30 +00:00
GitCaddy
b6d700af60 fix: Use PowerShell instead of deprecated wmic for Windows CPU detection
Some checks failed
CI / build-and-test (push) Failing after 37s
Release / build (amd64, linux) (push) Successful in 1m6s
Release / build (amd64, darwin) (push) Successful in 1m22s
Release / build (amd64, windows) (push) Successful in 49s
Release / build (arm64, darwin) (push) Successful in 1m1s
Release / build (arm64, linux) (push) Successful in 49s
Release / release (push) Successful in 18s
wmic is deprecated in newer Windows versions and returns empty results.
Use Get-CimInstance Win32_Processor via PowerShell instead.
2026-01-14 18:00:21 +00:00
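The replacement command can be sketched as below; the helper name and the exact CIM property queried are assumptions based on the commit message, not code from the repository:

```go
package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// windowsCPUCommand returns the command used to query CPU info on Windows.
// wmic is deprecated and may return empty output on newer Windows builds,
// so PowerShell's Get-CimInstance is used instead.
func windowsCPUCommand() (string, []string) {
	return "powershell", []string{
		"-NoProfile", "-Command",
		"(Get-CimInstance Win32_Processor).NumberOfLogicalProcessors",
	}
}

func main() {
	name, args := windowsCPUCommand()
	if runtime.GOOS == "windows" {
		out, err := exec.Command(name, args...).Output()
		fmt.Println(string(out), err)
	} else {
		fmt.Println("would run:", name, args)
	}
}
```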
GitCaddy
7c0d11c353 chore: Reduce go-build cache retention to 3 days
Some checks failed
CI / build-and-test (push) Failing after 33s
2026-01-14 12:19:38 +00:00
GitCaddy
b9ae4d5f36 feat: Add auto-cleanup and fix container CPU detection
Some checks failed
CI / build-and-test (push) Failing after 37s
- Add automatic disk cleanup when usage exceeds 85%
- Fix false CPU readings in LXC containers (was showing host load)
- Add cross-platform cache cleanup (Linux, macOS, Windows)
- Extend temp file patterns for go-build, node-compile-cache, etc.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 12:12:34 +00:00
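The 85% trigger described above reduces to a small threshold check; this is a minimal sketch of the idea, with hypothetical names, not the repository's implementation:

```go
package main

import "fmt"

// cleanupThreshold is the disk-usage fraction above which cleanup runs,
// per the commit message ("usage exceeds 85%").
const cleanupThreshold = 0.85

// needsCleanup reports whether used/total disk space exceeds the threshold.
func needsCleanup(usedBytes, totalBytes uint64) bool {
	if totalBytes == 0 {
		return false
	}
	return float64(usedBytes)/float64(totalBytes) > cleanupThreshold
}

func main() {
	fmt.Println(needsCleanup(90, 100)) // above threshold
	fmt.Println(needsCleanup(50, 100)) // below threshold
}
```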
GitCaddy
3a66563c1e chore: Fix gofmt formatting in runner.go
All checks were successful
CI / build-and-test (push) Successful in 1m1s
Release / build (amd64, darwin) (push) Successful in 50s
Release / build (amd64, linux) (push) Successful in 1m0s
Release / build (amd64, windows) (push) Successful in 1m7s
Release / build (arm64, darwin) (push) Successful in 1m27s
Release / build (arm64, linux) (push) Successful in 1m2s
Release / release (push) Successful in 54s
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:57:48 +00:00
GitCaddy
e0feb6bd4e chore: Remove gitea-vet from build process
Some checks failed
CI / build-and-test (push) Failing after 30s
Release / build (amd64, darwin) (push) Failing after 1m20s
Release / build (arm64, darwin) (push) Failing after 1m32s
Release / build (amd64, windows) (push) Failing after 1m40s
Release / build (arm64, linux) (push) Successful in 1m30s
Release / release (push) Has been cancelled
Release / build (amd64, linux) (push) Has been cancelled
Use standard go vet instead of gitea-vet for copyright checks.
This allows MarketAlly copyright headers in new files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:53:17 +00:00
GitCaddy
0db86bc6a4 chore: Fix linter issues and update copyrights
Some checks failed
CI / build-and-test (push) Failing after 55s
Release / build (amd64, darwin) (push) Has been cancelled
Release / build (amd64, linux) (push) Has been cancelled
Release / build (amd64, windows) (push) Has been cancelled
Release / build (arm64, darwin) (push) Has been cancelled
Release / build (arm64, linux) (push) Has been cancelled
Release / release (push) Has been cancelled
- Format Go files with gofmt
- Update copyrights to include MarketAlly
- Add MarketAlly copyright to files we created

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-14 09:41:16 +00:00
GitCaddy
f5b22c4149 feat: Add build cache cleanup and CLI cleanup command
Some checks failed
CI / build-and-test (push) Failing after 30s
- Add cleanup for common build tool caches (Go, npm, NuGet, Gradle, Maven, pip, Cargo)
- Build caches cleaned for files older than 7 days
- Add gitcaddy-runner cleanup CLI command for manual cleanup trigger
- Fixes disk space issues from accumulated CI build artifacts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-14 09:26:21 +00:00
GitCaddy
0ba2e0c3d5 feat: Add CPU load monitoring and cleanup support
Some checks failed
CI / build-and-test (push) Failing after 55s
- Add CPUInfo struct with load average and percentage
- Add detectCPULoad() for Linux, macOS, and Windows
- Add cleanup package for disk space management
- Handle RequestCleanup signal from server
- Report CPU load in capabilities to server

🤖 Generated with Claude Code
2026-01-14 08:48:54 +00:00
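On Linux, a load-average reading like the one detectCPULoad() reports typically comes from /proc/loadavg; this sketch shows the parsing under that assumption (function names are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLoadAvg extracts the 1-minute load average from /proc/loadavg
// content, e.g. "0.52 0.58 0.59 1/467 12345".
func parseLoadAvg(content string) (float64, error) {
	fields := strings.Fields(content)
	if len(fields) < 1 {
		return 0, fmt.Errorf("unexpected /proc/loadavg format: %q", content)
	}
	return strconv.ParseFloat(fields[0], 64)
}

// loadPercent converts a load average into a rough utilization percentage
// relative to the number of CPUs.
func loadPercent(load float64, numCPU int) float64 {
	if numCPU <= 0 {
		return 0
	}
	return load / float64(numCPU) * 100
}

func main() {
	load, _ := parseLoadAvg("0.52 0.58 0.59 1/467 12345")
	fmt.Println(load, loadPercent(load, 4))
}
```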
GitCaddy
8a54ec62d4 fix: Use linux-latest instead of ubuntu-latest
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (amd64, darwin) (push) Has been cancelled
Release / build (amd64, linux) (push) Has been cancelled
Release / build (amd64, windows) (push) Has been cancelled
Release / build (arm64, darwin) (push) Has been cancelled
Release / build (arm64, linux) (push) Has been cancelled
Release / release (push) Has been cancelled
2026-01-14 07:39:18 +00:00
GitCaddy
587ac42be4 feat: Rebrand to gitcaddy-runner with upload helper
Some checks failed
Release / build (amd64, linux) (push) Successful in 1m12s
Release / build (amd64, darwin) (push) Successful in 1m16s
Release / build (arm64, darwin) (push) Successful in 1m0s
Release / build (amd64, windows) (push) Successful in 1m13s
Release / build (arm64, linux) (push) Successful in 45s
Release / release (push) Successful in 50s
CI / build-and-test (push) Has been cancelled
- Rename binary from act_runner to gitcaddy-runner
- Update all user-facing strings (Gitea → GitCaddy)
- Add gitcaddy-upload helper with automatic retry for large files
- Add upload helper package (internal/pkg/artifact)
- Update Docker image name to marketally/gitcaddy-runner

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-14 07:26:46 +00:00
GitCaddy
56dcda0d5e fix: remove binaries from git tracking
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (amd64, darwin) (push) Successful in 1m22s
Release / build (arm64, darwin) (push) Successful in 2m9s
Release / build (amd64, linux) (push) Successful in 2m19s
Release / build (amd64, windows) (push) Successful in 2m22s
Release / build (arm64, linux) (push) Successful in 1m9s
Release / release (push) Successful in 21s
2026-01-12 01:36:19 +00:00
GitCaddy
e44f0c403b fix: remove accidentally committed binaries and add to gitignore
Some checks failed
CI / build-and-test (push) Has been cancelled
2026-01-12 01:35:38 +00:00
GitCaddy
fb1498bf7a fix: add -a flag to force rebuild and prevent cached binaries
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (amd64, darwin) (push) Successful in 1m14s
Release / build (amd64, windows) (push) Successful in 1m38s
Release / build (amd64, linux) (push) Successful in 2m52s
Release / build (arm64, darwin) (push) Successful in 2m50s
Release / build (arm64, linux) (push) Successful in 1m48s
Release / release (push) Successful in 47s
2026-01-12 01:28:20 +00:00
GitCaddy
fa69213d15 fix: use GitHub Actions expression syntax for VERSION
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (amd64, linux) (push) Successful in 45s
Release / build (amd64, windows) (push) Successful in 54s
Release / build (amd64, darwin) (push) Successful in 1m8s
Release / build (arm64, darwin) (push) Successful in 1m6s
Release / build (arm64, linux) (push) Successful in 42s
Release / release (push) Successful in 25s
2026-01-12 01:22:54 +00:00
GitCaddy
f92e50f35b fix: use GITHUB_REF instead of GITHUB_REF_NAME for version extraction
Some checks failed
Release / build (amd64, darwin) (push) Successful in 47s
Release / build (amd64, linux) (push) Successful in 55s
Release / build (arm64, darwin) (push) Successful in 1m7s
Release / build (amd64, windows) (push) Successful in 1m10s
Release / build (arm64, linux) (push) Successful in 51s
Release / release (push) Successful in 26s
CI / build-and-test (push) Has been cancelled
2026-01-12 01:14:20 +00:00
GitCaddy
a792b47b41 fix: isolate golangci-lint cache per job to prevent parallel conflicts
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (amd64, darwin) (push) Successful in 56s
Release / build (amd64, linux) (push) Successful in 1m0s
Release / build (amd64, windows) (push) Successful in 1m9s
Release / build (arm64, linux) (push) Successful in 38s
Release / build (arm64, darwin) (push) Successful in 55s
Release / release (push) Successful in 27s
Add GOLANGCI_LINT_CACHE and XDG_CACHE_HOME environment variables
pointing to job-specific cache directory to prevent parallel job
conflicts when running golangci-lint.

🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 00:47:11 +00:00
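The cache isolation described in the message amounts to pointing two environment variables at a per-job directory; a sketch, with the directory layout assumed:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// lintCacheEnv builds the environment overrides that point golangci-lint's
// cache at a job-specific directory, so parallel jobs do not share state.
func lintCacheEnv(jobCacheDir string) []string {
	return []string{
		"GOLANGCI_LINT_CACHE=" + filepath.Join(jobCacheDir, "golangci-lint"),
		"XDG_CACHE_HOME=" + filepath.Join(jobCacheDir, "xdg"),
	}
}

func main() {
	for _, e := range lintCacheEnv("/home/runner/.cache/act/jobs/task42") {
		fmt.Println(e)
	}
}
```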
GitCaddy
68ec7efde0 fix: isolate golangci-lint cache per job to prevent parallel conflicts
Add GOLANGCI_LINT_CACHE and XDG_CACHE_HOME environment variables
pointing to job-specific cache directory to prevent parallel job
conflicts when running golangci-lint.

🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 00:45:26 +00:00
GitCaddy
f314ffb036 feat: implement job-isolated cache directories
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (amd64, darwin) (push) Successful in 17s
Release / build (amd64, linux) (push) Successful in 21s
Release / build (amd64, windows) (push) Successful in 17s
Release / build (arm64, darwin) (push) Successful in 16s
Release / build (arm64, linux) (push) Successful in 43s
Release / release (push) Successful in 29s
- Each job now gets its own cache directory: ~/.cache/act/jobs/{taskId}/
- Cache is cleaned up automatically after job completion
- Periodic cleanup removes stale job caches older than 2 hours
- Eliminates race conditions in npm/pnpm cache operations
- No more ENOTEMPTY errors from concurrent tool installs
- Fix workflow: use linux-latest and setup-go@v4
2026-01-11 22:25:24 +00:00
GitCaddy
b303a83a77 feat(capabilities): add visionOS SDK, PowerShell versions, working directory disk space
Some checks failed
CI / build-and-test (push) Has been cancelled
Release / build (amd64, darwin) (push) Has been cancelled
Release / build (amd64, linux) (push) Has been cancelled
Release / build (amd64, windows) (push) Has been cancelled
Release / build (arm64, darwin) (push) Has been cancelled
Release / build (arm64, linux) (push) Has been cancelled
Release / release (push) Has been cancelled
- Add visionOS/xrOS SDK detection for Vision Pro development
- Add PowerShell version detection (pwsh and powershell) with actual versions
- Detect disk space on working directory filesystem (not just root)
  - Useful for runners using external/USB drives for builds
- Add watchOS and tvOS suggested labels
- Refactor disk detection to accept path parameter

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 20:32:23 +00:00
GitCaddy
66d0b1e608 feat(capabilities): enhanced tool and platform detection
Some checks failed
CI / build-and-test (push) Has been cancelled
macOS:
- Xcode version and build detection
- iOS/watchOS/tvOS SDK detection
- iOS Simulator detection
- Swift, CocoaPods, Carthage, fastlane detection
- Code signing tools (codesign, notarytool)
- Package builders (pkgbuild, create-dmg)

Windows:
- Visual Studio detection via vswhere
- MSBuild detection
- Inno Setup (ISCC) detection
- NSIS (makensis) detection
- WiX Toolset detection
- Windows SDK signtool detection
- Package managers (Chocolatey, Scoop, winget)

Linux:
- GCC/Clang compiler detection
- Build tools (autoconf, automake, meson)
- Package builders (dpkg-deb, rpmbuild, fpm)
- AppImage tools detection

Cross-platform:
- Ruby, PHP, Swift, Kotlin, Flutter, Dart
- CMake, Make, Ninja, Gradle, Maven
- npm, yarn, pnpm, cargo, pip
- Git version detection

Suggested labels now include:
- xcode, ios, ios-simulator for macOS with Xcode
- inno-setup, nsis, msbuild, vs2022 for Windows
- Tool-based labels (dotnet, java, node)

🤖 Generated with Claude Code
2026-01-11 20:20:02 +00:00
GitCaddy
48a589eb79 fix: add cross-platform disk detection for Windows/macOS builds
Some checks failed
CI / build-and-test (push) Failing after 2s
Release / build (amd64, darwin) (push) Successful in 6s
Release / build (amd64, linux) (push) Successful in 5s
Release / build (amd64, windows) (push) Successful in 6s
Release / build (arm64, darwin) (push) Successful in 5s
Release / build (arm64, linux) (push) Successful in 5s
Release / release (push) Successful in 11s
- Split detectDiskSpace() into platform-specific files with build tags
- disk_unix.go: Uses unix.Statfs for Linux and macOS
- disk_windows.go: Uses windows.GetDiskFreeSpaceEx for Windows
- Fixes Windows cross-compilation build errors

🤖 Generated with Claude Code
2026-01-11 19:29:27 +00:00
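The platform split keeps the syscalls behind build tags while the arithmetic stays portable; a sketch of the shared computation, with the tag arrangement noted in comments (an assumption about the layout, not the repository's code):

```go
package main

import "fmt"

// Platform-specific files supply free/total bytes: e.g. disk_unix.go
// (guarded by `//go:build linux || darwin`, using unix.Statfs) and
// disk_windows.go (using windows.GetDiskFreeSpaceEx). The shared
// percentage math can live in portable code:

// usedPercent computes disk utilization from total and free byte counts.
func usedPercent(totalBytes, freeBytes uint64) float64 {
	if totalBytes == 0 {
		return 0
	}
	return float64(totalBytes-freeBytes) / float64(totalBytes) * 100
}

func main() {
	fmt.Printf("%.1f%%\n", usedPercent(1000, 250)) // 75.0%
}
```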
GitCaddy
fef300dd5b docs: add HOWTOSTART.md guide for setting up runners
All checks were successful
CI / build-and-test (push) Successful in 17s
Comprehensive guide covering:
- Prerequisites and quick start
- Registration process
- Labels configuration
- Running as a systemd service
- Docker support
- Capabilities detection
- Troubleshooting tips

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 18:18:39 +00:00
GitCaddy
49a0b6f167 feat(capabilities): Add Linux distro detection and suggested labels
All checks were successful
CI / build-and-test (push) Successful in 14s
- Add DistroInfo struct to detect Linux distribution from /etc/os-release
- Add detectLinuxDistro() function to parse distro ID, version, pretty name
- Add generateSuggestedLabels() to create industry-standard labels
- Suggested labels include: linux/windows/macos, distro name, with -latest suffix

🤖 Generated with Claude Code
2026-01-11 17:25:45 +00:00
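Parsing /etc/os-release and deriving labels as described above can be sketched like this; the helpers are simplified stand-ins for the commit's detectLinuxDistro() and generateSuggestedLabels():

```go
package main

import (
	"fmt"
	"strings"
)

// parseOSRelease extracts key=value pairs (ID, VERSION_ID, PRETTY_NAME,
// etc.) from the contents of /etc/os-release.
func parseOSRelease(content string) map[string]string {
	info := map[string]string{}
	for _, line := range strings.Split(content, "\n") {
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[key] = strings.Trim(value, `"`)
	}
	return info
}

// suggestedLabels builds labels like the commit describes: the OS, the
// distro ID, and a "-latest" variant.
func suggestedLabels(distroID string) []string {
	return []string{"linux", distroID, distroID + "-latest"}
}

func main() {
	sample := "ID=ubuntu\nVERSION_ID=\"24.04\"\nPRETTY_NAME=\"Ubuntu 24.04 LTS\"\n"
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], info["VERSION_ID"])
	fmt.Println(suggestedLabels(info["ID"]))
}
```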
GitCaddy
e5fdaadbd2 feat: handle bandwidth test requests from server
All checks were successful
CI / build-and-test (push) Successful in 8s
- Update to actions-proto-go v0.5.7 with RequestBandwidthTest field
- Add SetBandwidthManager method to Poller
- Check FetchTaskResponse for bandwidth test request
- Include bandwidth in capabilities sent to server

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-11 15:29:21 +00:00
GitCaddy
ab382dc256 feat: add bandwidth testing to runner capabilities
All checks were successful
CI / build-and-test (push) Successful in 8s
- Add BandwidthManager for periodic bandwidth tests (hourly)
- Test download speed and latency against registered Gitea server
- Include bandwidth in runner capabilities JSON
- Add FormatBandwidth helper for display

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-11 07:38:49 +00:00
39 changed files with 2587 additions and 417 deletions

View File

@@ -7,7 +7,7 @@ on:
jobs:
build:
runs-on: ubuntu-latest
runs-on: linux-latest
strategy:
matrix:
include:
@@ -26,33 +26,36 @@ jobs:
with:
fetch-depth: 0
- uses: actions/setup-go@v5
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: Build
env:
GOPRIVATE: git.marketally.com
VERSION: ${{ github.ref_name }}
run: |
VERSION=${GITHUB_REF_NAME#v}
# Strip the v prefix from tag
VERSION="${VERSION#v}"
EXT=""
if [ "${{ matrix.goos }}" = "windows" ]; then
EXT=".exe"
fi
echo "Building version: ${VERSION}"
CGO_ENABLED=0 GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} \
go build -ldflags "-X gitea.com/gitea/act_runner/internal/pkg/ver.version=${VERSION}" \
-o act_runner-${{ matrix.goos }}-${{ matrix.goarch }}${EXT}
env:
GOPRIVATE: git.marketally.com
go build -a -ldflags "-X git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver.version=${VERSION}" \
-o gitcaddy-runner-${VERSION}-${{ matrix.goos }}-${{ matrix.goarch }}${EXT}
- name: Upload artifact
uses: actions/upload-artifact@v3
with:
name: act_runner-${{ matrix.goos }}-${{ matrix.goarch }}
path: act_runner-*
name: gitcaddy-runner-${{ matrix.goos }}-${{ matrix.goarch }}
path: gitcaddy-runner-*
release:
needs: build
runs-on: ubuntu-latest
runs-on: linux-latest
steps:
- uses: actions/checkout@v4
@@ -64,7 +67,7 @@ jobs:
- name: Prepare release files
run: |
mkdir -p release
find artifacts -type f -name 'act_runner-*' -exec mv {} release/ \;
find artifacts -type f -name 'gitcaddy-runner-*' -exec mv {} release/ \;
cd release && sha256sum * > checksums.txt
- name: Create Release

View File

@@ -8,7 +8,7 @@ on:
jobs:
build-and-test:
runs-on: ubuntu-latest
runs-on: linux-latest
steps:
- uses: actions/checkout@v4

1
.gitignore vendored
View File

@@ -12,3 +12,4 @@ coverage.txt
__debug_bin
# gorelease binary folder
dist
act_runner-*

View File

@@ -1,53 +1,42 @@
version: "2"
linters:
default: none
enable:
- gosimple
- typecheck
- govet
- errcheck
- staticcheck
- unused
- dupl
#- gocyclo # The cyclomatic complexety of a lot of functions is too high, we should refactor those another time.
- gofmt
- misspell
- gocritic
- bidichk
- ineffassign
- revive
- gofumpt
- depguard
- nakedret
- unconvert
- wastedassign
- nolintlint
- stylecheck
enable-all: false
disable-all: true
fast: false
formatters:
enable:
- gofmt
- gofumpt
run:
go: 1.18
go: "1.23"
timeout: 10m
skip-dirs:
- node_modules
- public
- web_src
linters-settings:
stylecheck:
checks: ["all", "-ST1005", "-ST1003"]
nakedret:
max-func-lines: 0
gocritic:
disabled-checks:
- ifElseChain
- singleCaseSwitch # Every time this occurred in the code, there was no other way.
- singleCaseSwitch
revive:
ignore-generated-header: false
severity: warning
confidence: 0.8
errorCode: 1
warningCode: 1
rules:
- name: blank-imports
- name: context-as-argument
@@ -72,94 +61,25 @@ linters-settings:
- name: modifies-value-receiver
gofumpt:
extra-rules: true
lang-version: "1.18"
depguard:
# TODO: use depguard to replace import checks in gitea-vet
list-type: denylist
# Check the list against standard lib.
include-go-root: true
packages-with-error-message:
- github.com/unknwon/com: "use gitea's util and replacements"
issues:
exclude-rules:
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- gocyclo
- errcheck
- dupl
- gosec
- unparam
- staticcheck
- path: models/migrations/v
linters:
- gocyclo
- errcheck
- dupl
- gosec
- linters:
- dupl
text: "webhook"
- linters:
- gocritic
text: "`ID' should not be capitalized"
- path: modules/templates/helper.go
linters:
- gocritic
- linters:
- unused
text: "swagger"
- path: contrib/pr/checkout.go
linters:
- errcheck
- path: models/issue.go
linters:
- errcheck
- path: models/migrations/
linters:
- errcheck
- path: modules/log/
linters:
- errcheck
- path: routers/api/v1/repo/issue_subscription.go
linters:
- dupl
- path: routers/repo/view.go
linters:
- dupl
- path: models/migrations/
linters:
- unused
- linters:
- staticcheck
text: "argument x is overwritten before first use"
- path: modules/httplib/httplib.go
linters:
- staticcheck
# Enabling this would require refactoring the methods and how they are called.
- path: models/issue_comment_list.go
linters:
- dupl
- linters:
- misspell
text: '`Unknwon` is a misspelling of `Unknown`'
- path: models/update.go
linters:
- unused
- path: cmd/dump.go
linters:
- dupl
- text: "commentFormatting: put a space between `//` and comment text"
linters:
- gocritic
- text: "exitAfterDefer:"
linters:
- gocritic
- path: modules/graceful/manager_windows.go
linters:
- staticcheck
text: "svc.IsAnInteractiveSession is deprecated: Use IsWindowsService instead."
- path: models/user/openid.go
linters:
- golint

View File

@@ -1,5 +1,7 @@
version: 2
project_name: gitcaddy-runner
before:
hooks:
- go mod tidy
@@ -63,7 +65,7 @@ builds:
flags:
- -trimpath
ldflags:
- -s -w -X gitea.com/gitea/act_runner/internal/pkg/ver.version={{ .Summary }}
- -s -w -X git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver.version={{ .Summary }}
binary: >-
{{ .ProjectName }}-
{{- .Version }}-
@@ -86,7 +88,7 @@ blobs:
provider: s3
bucket: "{{ .Env.S3_BUCKET }}"
region: "{{ .Env.S3_REGION }}"
directory: "act_runner/{{.Version}}"
directory: "gitcaddy-runner/{{.Version}}"
extra_files:
- glob: ./**.xz
- glob: ./**.sha256
@@ -108,8 +110,8 @@ nightly:
version_template: "nightly"
gitea_urls:
api: https://gitea.com/api/v1
download: https://gitea.com
api: https://git.marketally.com/api/v1
download: https://git.marketally.com
release:
extra_files:

121
HOWTOSTART.md Normal file
View File

@@ -0,0 +1,121 @@
# How to Start a GitCaddy Runner
This guide explains how to set up and start a GitCaddy Actions runner (act_runner) to execute your CI/CD workflows.
## Prerequisites
- A Linux, macOS, or Windows machine
- Network access to your GitCaddy/Gitea instance
- (Optional) Docker installed for container-based workflows
## Quick Start
### 1. Download the Runner
Download the latest release from the [releases page](https://git.marketally.com/gitcaddy/act_runner/releases) or build from source:
```bash
git clone https://git.marketally.com/gitcaddy/act_runner.git
cd act_runner
make build
```
### 2. Register the Runner
Get a registration token from your GitCaddy instance:
- **Global runners**: Admin Area → Actions → Runners → Create Runner
- **Organization runners**: Organization Settings → Actions → Runners
- **Repository runners**: Repository Settings → Actions → Runners
Then register:
```bash
./act_runner register --no-interactive \
--instance https://your-gitea-instance.com \
--token YOUR_REGISTRATION_TOKEN \
--name my-runner \
--labels linux,ubuntu-latest
```
### 3. Start the Runner
```bash
./act_runner daemon
```
## Configuration Options
### Runner Labels
Labels determine which jobs the runner can execute. Configure labels during registration or edit them in the admin UI.
Common labels:
- `linux`, `linux-latest` - Linux runners
- `windows`, `windows-latest` - Windows runners
- `macos`, `macos-latest` - macOS runners
- `ubuntu`, `ubuntu-latest` - Ubuntu-specific
- `self-hosted` - Self-hosted runners
### Running as a Service
#### Linux (systemd)
```bash
sudo cat > /etc/systemd/system/act_runner.service << 'SERVICE'
[Unit]
Description=GitCaddy Actions Runner
After=network.target
[Service]
Type=simple
User=runner
WorkingDirectory=/opt/act_runner
ExecStart=/opt/act_runner/act_runner daemon
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
SERVICE
sudo systemctl enable act_runner
sudo systemctl start act_runner
```
### Docker Support
For workflows that use container actions, ensure Docker is installed and the runner user has access:
```bash
sudo usermod -aG docker $USER
```
## Capabilities Detection
The runner automatically detects and reports:
- Operating system and architecture
- Available shells (bash, sh, powershell)
- Installed tools (node, python, go, etc.)
- Docker availability
- Disk space and network bandwidth
These capabilities help admins understand what each runner can handle.
## Troubleshooting
### Runner not connecting
1. Check network connectivity to your GitCaddy instance
2. Verify the registration token is valid
3. Check firewall rules allow outbound HTTPS
### Jobs not running
1. Verify runner labels match the job's `runs-on` requirement
2. Check runner is online in the admin panel
3. Review runner logs: `journalctl -u act_runner -f`
## More Information
- [act_runner Repository](https://git.marketally.com/gitcaddy/act_runner)
- [GitCaddy Documentation](https://git.marketally.com/gitcaddy/gitea)

View File

@@ -1,5 +1,5 @@
DIST := dist
EXECUTABLE := act_runner
EXECUTABLE := gitcaddy-runner
GOFMT ?= gofumpt -l
DIST_DIRS := $(DIST)/binaries $(DIST)/release
GO ?= go
@@ -15,7 +15,7 @@ WINDOWS_ARCHS ?= windows/amd64
GO_FMT_FILES := $(shell find . -type f -name "*.go" ! -name "generated.*")
GOFILES := $(shell find . -type f -name "*.go" -o -name "go.mod" ! -name "generated.*")
DOCKER_IMAGE ?= gitea/act_runner
DOCKER_IMAGE ?= marketally/gitcaddy-runner
DOCKER_TAG ?= nightly
DOCKER_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)
DOCKER_ROOTLESS_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)-dind-rootless
@@ -67,11 +67,11 @@ else
endif
endif
GO_PACKAGES_TO_VET ?= $(filter-out gitea.com/gitea/act_runner/internal/pkg/client/mocks,$(shell $(GO) list ./...))
GO_PACKAGES_TO_VET ?= $(filter-out git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/client/mocks,$(shell $(GO) list ./...))
TAGS ?=
LDFLAGS ?= -X "gitea.com/gitea/act_runner/internal/pkg/ver.version=v$(RELASE_VERSION)"
LDFLAGS ?= -X "git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver.version=v$(RELASE_VERSION)"
all: build
@@ -117,8 +117,7 @@ test: fmt-check security-check
.PHONY: vet
vet:
@echo "Running go vet..."
@$(GO) build code.gitea.io/gitea-vet
@$(GO) vet -vettool=gitea-vet $(GO_PACKAGES_TO_VET)
@$(GO) vet $(GO_PACKAGES_TO_VET)
install: $(GOFILES)
$(GO) install -v -tags '$(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)'

523
README.md
View File

@@ -1,121 +1,81 @@
# GitCaddy Act Runner
# GitCaddy Runner
A Gitea Actions runner with enhanced capability detection and reporting for AI-friendly workflow generation.
> **This is a GitCaddy fork** of [gitea.com/gitea/act_runner](https://gitea.com/gitea/act_runner) with runner capability discovery features.
GitCaddy Runner is a hard fork of Gitea's act_runner, rebranded and enhanced with automated capability detection to enable AI tools to generate compatible workflows based on available resources.
## Overview
## Features
Act Runner executes Gitea Actions workflows using [act](https://github.com/nektos/act). This fork adds automatic capability detection, enabling Gitea to expose runner capabilities via API for AI tools to query before generating workflows.
## Key Features
- **Capability Detection**: Automatically detects OS, architecture, Docker support, available shells, and installed tools
- **Capability Reporting**: Reports capabilities to Gitea server during runner declaration
- **Full Compatibility**: Drop-in replacement for standard act_runner
- **Multi-Platform**: Supports Linux, macOS, and Windows
- **Automated Capability Detection**: Automatically identifies OS, architecture, installed tools, and available resources
- **Concurrent Job Execution**: Configure runner capacity to handle multiple jobs simultaneously
- **Docker Support**: Full support for Docker and Docker Compose workflows
- **Xcode Integration**: Detects Xcode installations, SDKs, and simulators on macOS
- **Tool Detection**: Identifies installed tools (Node.js, Python, .NET, Go, Ruby, Swift, etc.)
- **AI-Friendly API**: Exposes capabilities through Gitea's API for automated workflow generation
- **Cache Support**: Built-in workflow cache support for faster builds
## Installation
### Pre-built Binaries

Download the latest release for your platform from the [releases page](https://git.marketally.com/gitcaddy/gitcaddy-runner/releases):

**macOS:**

```bash
# Apple Silicon (M1/M2/M3/M4)
curl -L -o gitcaddy-runner https://git.marketally.com/gitcaddy/gitcaddy-runner/releases/download/v1.0.0/gitcaddy-runner-1.0.0-darwin-arm64
chmod +x gitcaddy-runner

# Intel
curl -L -o gitcaddy-runner https://git.marketally.com/gitcaddy/gitcaddy-runner/releases/download/v1.0.0/gitcaddy-runner-1.0.0-darwin-amd64
chmod +x gitcaddy-runner
```
**Linux:**
```bash
# x86_64
curl -L -o gitcaddy-runner https://git.marketally.com/gitcaddy/gitcaddy-runner/releases/download/v1.0.0/gitcaddy-runner-1.0.0-linux-amd64
chmod +x gitcaddy-runner
# ARM64
curl -L -o gitcaddy-runner https://git.marketally.com/gitcaddy/gitcaddy-runner/releases/download/v1.0.0/gitcaddy-runner-1.0.0-linux-arm64
chmod +x gitcaddy-runner
```
**Windows:**
```powershell
# Download the Windows executable
# https://git.marketally.com/gitcaddy/gitcaddy-runner/releases/download/v1.0.0/gitcaddy-runner-1.0.0-windows-amd64.exe
```
### Build from Source
```bash
git clone https://git.marketally.com/gitcaddy/gitcaddy-runner.git
cd gitcaddy-runner
make build
```
## Quick Start
### 1. Enable Gitea Actions

In your Gitea `app.ini`:
```ini
[actions]
ENABLED = true
```
### 2. Generate Configuration

```bash
./gitcaddy-runner generate-config > config.yaml
```
### 3. Configure the Runner
Edit `config.yaml` to customize settings. **Important configuration options:**
```yaml
log:
  level: info

runner:
  file: .runner
  capacity: 2  # Number of concurrent jobs (default: 1)
  timeout: 3h
  insecure: false
  fetch_timeout: 5s
  fetch_interval: 2s
  labels:
    - "ubuntu-latest:docker://node:16-bullseye"
    - "ubuntu-22.04:docker://node:16-bullseye"

cache:
  enabled: true
  dir: ""

container:
  network: ""
  privileged: false
  options: ""
  valid_volumes: []
  docker_host: ""
  force_pull: false

host:
  workdir_parent: ""
```
#### Capacity Configuration
The `capacity` setting controls how many jobs the runner can execute simultaneously:
- **Default**: 1 (one job at a time)
- **Recommended**: 2-4 for multi-core systems
- **Considerations**:
- Each job consumes CPU, memory, and disk I/O
- iOS/macOS builds are resource-intensive (start with 2)
- Lighter builds (Node.js, Go) can handle higher capacity (4-6)
- Monitor system load and adjust accordingly
**Example for different workloads:**
```yaml
# Light builds (web apps, APIs)
runner:
capacity: 4
# Mixed builds
runner:
capacity: 2
# Heavy builds (iOS/macOS, large containers)
runner:
capacity: 1
```
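The sizing guidance above can also be sketched as a quick shell heuristic (illustrative only, not part of gitcaddy-runner; tune against observed load):

```shell
# Illustrative heuristic: derive starting capacities from core count.
# Shown with a fixed cores=8; substitute "$(nproc)" (Linux) or
# "$(sysctl -n hw.ncpu)" (macOS) for your machine.
cores=8
heavy=$(( cores / 4 ))   # iOS/macOS builds, large containers
light=$(( cores / 2 ))   # Node.js, Go, other light builds
if [ "$heavy" -lt 1 ]; then heavy=1; fi
if [ "$light" -lt 1 ]; then light=1; fi
echo "heavy builds: capacity=$heavy"
echo "light builds: capacity=$light"
```

Treat the result as a starting point, then watch CPU, memory, and disk I/O under real jobs before raising it.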
### 4. Register the Runner
```bash
./gitcaddy-runner register \
--instance https://your-gitea-instance.com \
--token YOUR_REGISTRATION_TOKEN \
--name my-runner \
--labels ubuntu-latest:docker://node:16-bullseye
```
The registration token can be obtained from:

- Gitea Admin Panel > Actions > Runners
- Or repository Settings > Actions > Runners

### 5. Start the Runner

**Important:** Always specify the config file path with the `-c` flag:

```bash
./gitcaddy-runner daemon -c config.yaml
```

**Without the `-c` flag, the runner will use default settings and ignore your config.yaml!**

## GitCaddy Integration

This runner is designed to work with the [GitCaddy Gitea fork](https://git.marketally.com/gitcaddy/gitea), which provides:

- **Runner Capabilities API** (`/api/v2/repos/{owner}/{repo}/actions/runners/capabilities`)
- **Workflow Validation API** for pre-flight checks
- **Action Compatibility Database** for GitHub Actions mapping

### How It Works

```
gitcaddy-runner              Gitea                    AI Tool
      |                        |                         |
      | Declare + Capabilities |                         |
      |----------------------->|                         |
      |                        |  GET /api/v2/.../caps   |
      |                        |<------------------------|
      |                        |  Runner capabilities    |
      |                        |------------------------>|
      |                        |  Generates workflow     |
      |                        |  with correct config    |
```
## Running as a Service
### macOS (launchd)
Create `~/Library/LaunchAgents/com.gitcaddy.runner.plist`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.gitcaddy.runner</string>
<key>ProgramArguments</key>
<array>
<string>/path/to/gitcaddy-runner</string>
<string>daemon</string>
<string>-c</string>
<string>/path/to/config.yaml</string>
</array>
<key>WorkingDirectory</key>
<string>/path/to/runner/directory</string>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/path/to/runner.log</string>
<key>StandardErrorPath</key>
<string>/path/to/runner.err</string>
</dict>
</plist>
```
Load the service:

```bash
launchctl load ~/Library/LaunchAgents/com.gitcaddy.runner.plist
```

### Linux (systemd)

Create `/etc/systemd/system/gitcaddy-runner.service`:
```ini
[Unit]
Description=GitCaddy Actions Runner
After=network.target
[Service]
Type=simple
User=runner
WorkingDirectory=/home/runner/gitcaddy-runner
ExecStart=/home/runner/gitcaddy-runner/gitcaddy-runner daemon -c /home/runner/gitcaddy-runner/config.yaml
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
```
Enable and start:
```bash
sudo systemctl daemon-reload
sudo systemctl enable gitcaddy-runner
sudo systemctl start gitcaddy-runner
```
## Capability Detection
GitCaddy Runner automatically detects and reports system capabilities:
### Platform Information
- Operating system (darwin, linux, windows)
- Architecture (amd64, arm64)
### Container Runtime
- Docker availability and version
- Docker Compose support
- Container runtime type
### Development Tools
- Node.js, npm, yarn
- Python, pip
- Go
- .NET
- Ruby
- Rust
- Java
- Swift (macOS)
- Git, Make
### macOS-Specific
- Xcode version and build
- Available SDKs (iOS, macOS, tvOS, watchOS, visionOS)
- Simulators
- Code signing tools (codesign, pkgbuild)
- Package managers (Homebrew, CocoaPods, Fastlane)
### System Resources
- CPU cores
- Load average
- Disk space and usage
- Network bandwidth
### Example Capabilities Output
```json
{
"os": "darwin",
"arch": "arm64",
"capacity": 2,
"docker": true,
"docker_compose": true,
"container_runtime": "docker",
"xcode": {
"version": "15.2",
"build": "15C500b",
"sdks": ["iOS 17.2", "macOS 14.2"]
},
"tools": {
"node": ["20.11"],
"python": ["3.11"],
"swift": ["5.9"]
},
"build_tools": ["fastlane", "cocoapods", "codesign"],
"cpu": {
"num_cpu": 10,
"load_percent": 25.5
},
"disk": {
"free_bytes": 54199226368,
"used_percent": 77.89
}
}
```
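Downstream tooling can consume this JSON directly. A minimal sketch using `python3` from a shell, against an abbreviated payload based on the example above:

```shell
# Parse a (truncated) capabilities payload and pick out routing hints.
caps='{"os":"darwin","arch":"arm64","capacity":2,"docker":true,"tools":{"node":["20.11"]}}'
summary=$(echo "$caps" | python3 -c 'import json,sys; c=json.load(sys.stdin); print(c["os"]+"/"+c["arch"]+" capacity="+str(c["capacity"])+" node="+c["tools"]["node"][0])')
echo "$summary"
```

The same fields drive decisions like which `runs-on` label to emit or whether a workflow may use Docker services.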
## Configuration Reference
### Runner Section
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `capacity` | int | 1 | Maximum concurrent jobs |
| `timeout` | duration | 3h | Maximum job execution time |
| `insecure` | bool | false | Allow insecure HTTPS |
| `fetch_timeout` | duration | 5s | Timeout for fetching tasks |
| `fetch_interval` | duration | 2s | Interval between task fetches |
| `labels` | []string | [] | Runner labels for job matching |
| `env_file` | string | .env | Environment variables file |
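Putting the table together, a minimal `runner:` section might look like this (values illustrative):

```yaml
runner:
  file: .runner
  capacity: 2
  timeout: 3h
  fetch_timeout: 5s
  fetch_interval: 2s
  env_file: .env
  labels:
    - "ubuntu-latest:docker://node:16-bullseye"
```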
### Cache Section
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enabled` | bool | true | Enable cache support |
| `dir` | string | "" | Cache directory path |
| `host` | string | "" | External cache server host |
| `port` | int | 0 | External cache server port |
### Container Section
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `network` | string | "" | Docker network for containers |
| `privileged` | bool | false | Run containers in privileged mode |
| `docker_host` | string | "" | Custom Docker host |
| `force_pull` | bool | false | Always pull latest images |
## Troubleshooting
### Capacity Not Being Applied
**Problem:** Runner shows `"capacity":1` even though config.yaml has `capacity: 2`
**Solution:** Ensure you're using the `-c` flag when starting the daemon:
```bash
# ✅ Correct
./gitcaddy-runner daemon -c /path/to/config.yaml
# ❌ Wrong - uses defaults
./gitcaddy-runner daemon
```
Verify the config is being loaded:
```bash
# Check runner process
ps aux | grep gitcaddy-runner
# Should show: gitcaddy-runner daemon -c /path/to/config.yaml
```
### Docker Not Detected
**Problem:** Capabilities show `"docker":false` but Docker is installed
**Solution:**
1. Ensure Docker Desktop/daemon is running:
```bash
docker ps
```
2. Restart the runner after starting Docker:
```bash
./gitcaddy-runner daemon -c config.yaml
```
3. Check Docker socket permissions:
```bash
ls -l /var/run/docker.sock
```
### Jobs Not Running Concurrently
**Problem:** Jobs queue instead of running in parallel
**Checklist:**
1. Verify capacity in capabilities output (check runner logs)
2. Confirm config.yaml has `capacity > 1`
3. Ensure runner was started with `-c config.yaml` flag
4. Check system resources aren't maxed out
5. Restart runner after config changes
### Runner Not Starting
**Problem:** Runner exits immediately or fails to start
**Common causes:**
1. Invalid config.yaml syntax
2. `.runner` file missing (run `register` first)
3. Permission issues on working directory
4. Invalid Gitea instance URL or token
**Debug steps:**
```bash
# Check config syntax
./gitcaddy-runner generate-config > test-config.yaml
diff config.yaml test-config.yaml
# Test with verbose logging
./gitcaddy-runner daemon -c config.yaml --log-level debug
# Verify registration
cat .runner
```
## Environment Variables
GitCaddy Runner supports environment variable configuration:
| Variable | Description | Example |
|----------|-------------|---------|
| `GITEA_RUNNER_CAPACITY` | Override capacity setting | `GITEA_RUNNER_CAPACITY=2` |
| `GITEA_RUNNER_ENV_FILE` | Custom env file path | `GITEA_RUNNER_ENV_FILE=.env.prod` |
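For example, to override the configured capacity for one run without editing `config.yaml` (the daemon invocation is shown commented out because it requires a registered runner):

```shell
# Environment override for this invocation only.
export GITEA_RUNNER_CAPACITY=4
echo "capacity override: $GITEA_RUNNER_CAPACITY"
# ./gitcaddy-runner daemon -c config.yaml
```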
## API Integration
Query runner capabilities via Gitea API:
```bash
curl -H "Authorization: token YOUR_TOKEN" \
https://your-gitea.com/api/v1/runners
```
Use capabilities to generate compatible workflows:
```yaml
# Example: Use capabilities to select appropriate runner
name: Build
on: [push]
jobs:
build:
runs-on: ${{ capabilities.os == 'darwin' && 'macos-latest' || 'ubuntu-latest' }}
steps:
- uses: actions/checkout@v3
```
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
## License
MIT License - see [LICENSE](LICENSE) for details.
## Support
- Issues: https://git.marketally.com/gitcaddy/gitcaddy-runner/issues
- Discussions: https://git.marketally.com/gitcaddy/gitcaddy-runner/discussions
## Acknowledgments
GitCaddy Runner is a hard fork of [Gitea's act_runner](https://gitea.com/gitea/act_runner), rebranded and enhanced with automated capability detection and reporting features for AI-friendly workflow generation.

VERSION Normal file

@@ -0,0 +1 @@
1.0.0

act_runner_test Executable file

Binary file not shown.

cmd/upload-helper/main.go Normal file

@@ -0,0 +1,39 @@
// Copyright 2026 MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
// Package main provides the upload-helper CLI tool for reliable file uploads.
package main
import (
"flag"
"fmt"
"os"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/artifact"
)
func main() {
url := flag.String("url", "", "Upload URL")
token := flag.String("token", "", "Auth token")
file := flag.String("file", "", "File to upload")
retries := flag.Int("retries", 5, "Maximum retry attempts")
flag.Parse()
if *url == "" || *token == "" || *file == "" {
fmt.Fprintf(os.Stderr, "GitCaddy Upload Helper - Reliable file uploads with retry\n\n")
fmt.Fprintf(os.Stderr, "Usage: gitcaddy-upload -url URL -token TOKEN -file FILE\n\n")
fmt.Fprintf(os.Stderr, "Options:\n")
flag.PrintDefaults()
os.Exit(1)
}
helper := artifact.NewUploadHelper()
helper.MaxRetries = *retries
if err := helper.UploadWithRetry(*url, *token, *file); err != nil {
fmt.Fprintf(os.Stderr, "Upload failed: %v\n", err)
os.Exit(1)
}
fmt.Println("Upload succeeded!")
}

go.mod

@@ -1,4 +1,4 @@
module gitea.com/gitea/act_runner
module git.marketally.com/gitcaddy/gitcaddy-runner
go 1.24.0
@@ -111,4 +111,4 @@ replace github.com/go-git/go-git/v5 => github.com/go-git/go-git/v5 v5.16.2
replace github.com/distribution/reference v0.6.0 => github.com/distribution/reference v0.5.0
// Use GitCaddy fork with capability support
replace code.gitea.io/actions-proto-go => git.marketally.com/gitcaddy/actions-proto-go v0.5.6
replace code.gitea.io/actions-proto-go => git.marketally.com/gitcaddy/actions-proto-go v0.5.8

go.sum

@@ -6,8 +6,10 @@ cyphar.com/go-pathrs v0.2.1 h1:9nx1vOgwVvX1mNBWDu93+vaceedpbsDqo+XuBGL40b8=
cyphar.com/go-pathrs v0.2.1/go.mod h1:y8f1EMG7r+hCuFf/rXsKqMJrJAUoADZGNh5/vZPKcGc=
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
git.marketally.com/gitcaddy/actions-proto-go v0.5.6 h1:G7T0vpx8HyCFWd0YMJ9sp8rCsWtzFrCJK4BMdOFJa1A=
git.marketally.com/gitcaddy/actions-proto-go v0.5.6/go.mod h1:RPu21UoRD3zSAujoZR6LJwuVNa2uFRBveadslczCRfQ=
git.marketally.com/gitcaddy/actions-proto-go v0.5.7 h1:RUbafr3Vkw2l4WfSwa+oF+Ihakbm05W0FlAmXuQrDJc=
git.marketally.com/gitcaddy/actions-proto-go v0.5.7/go.mod h1:RPu21UoRD3zSAujoZR6LJwuVNa2uFRBveadslczCRfQ=
git.marketally.com/gitcaddy/actions-proto-go v0.5.8 h1:MBipeHvY6A0jcobvziUtzgatZTrV4fs/HE1rPQxREN4=
git.marketally.com/gitcaddy/actions-proto-go v0.5.8/go.mod h1:RPu21UoRD3zSAujoZR6LJwuVNa2uFRBveadslczCRfQ=
gitea.com/gitea/act v0.261.7-0.20251202193638-5417d3ac6742 h1:ulcquQluJbmNASkh6ina70LvcHEa9eWYfQ+DeAZ0VEE=
gitea.com/gitea/act v0.261.7-0.20251202193638-5417d3ac6742/go.mod h1:Pg5C9kQY1CEA3QjthjhlrqOC/QOT5NyWNjOjRHw23Ok=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=


@@ -9,7 +9,7 @@ import (
"os"
"os/signal"
"gitea.com/gitea/act_runner/internal/pkg/config"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/config"
"github.com/nektos/act/pkg/artifactcache"
log "github.com/sirupsen/logrus"
@@ -22,8 +22,8 @@ type cacheServerArgs struct {
Port uint16
}
func runCacheServer(ctx context.Context, configFile *string, cacheArgs *cacheServerArgs) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
func runCacheServer(_ context.Context, configFile *string, cacheArgs *cacheServerArgs) func(cmd *cobra.Command, args []string) error {
return func(_ *cobra.Command, _ []string) error {
cfg, err := config.LoadDefault(*configFile)
if err != nil {
return fmt.Errorf("invalid configuration: %w", err)


@@ -1,6 +1,7 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// Copyright 2022 The Gitea Authors and MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
// Package cmd provides the CLI commands for gitcaddy-runner.
package cmd
import (
@@ -10,14 +11,16 @@ import (
"github.com/spf13/cobra"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/ver"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/cleanup"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/config"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver"
)
// Execute runs the root command for gitcaddy-runner CLI.
func Execute(ctx context.Context) {
// ./act_runner
// ./gitcaddy-runner
rootCmd := &cobra.Command{
Use: "act_runner [event name to run]\nIf no event name passed, will default to \"on: push\"",
Use: "gitcaddy-runner [event name to run]\nIf no event name passed, will default to \"on: push\"",
Short: "Run GitHub actions locally by specifying the event name (e.g. `push`) or an action name directly.",
Args: cobra.MaximumNArgs(1),
Version: ver.Version(),
@@ -26,7 +29,7 @@ func Execute(ctx context.Context) {
configFile := ""
rootCmd.PersistentFlags().StringVarP(&configFile, "config", "c", "", "Config file path")
// ./act_runner register
// ./gitcaddy-runner register
var regArgs registerArgs
registerCmd := &cobra.Command{
Use: "register",
@@ -35,14 +38,14 @@ func Execute(ctx context.Context) {
RunE: runRegister(ctx, &regArgs, &configFile), // must use a pointer to regArgs
}
registerCmd.Flags().BoolVar(&regArgs.NoInteractive, "no-interactive", false, "Disable interactive mode")
registerCmd.Flags().StringVar(&regArgs.InstanceAddr, "instance", "", "Gitea instance address")
registerCmd.Flags().StringVar(&regArgs.InstanceAddr, "instance", "", "GitCaddy instance address")
registerCmd.Flags().StringVar(&regArgs.Token, "token", "", "Runner token")
registerCmd.Flags().StringVar(&regArgs.RunnerName, "name", "", "Runner name")
registerCmd.Flags().StringVar(&regArgs.Labels, "labels", "", "Runner tags, comma separated")
registerCmd.Flags().BoolVar(&regArgs.Ephemeral, "ephemeral", false, "Configure the runner to be ephemeral and only ever be able to pick a single job (stricter than --once)")
rootCmd.AddCommand(registerCmd)
// ./act_runner daemon
// ./gitcaddy-runner daemon
var daemArgs daemonArgs
daemonCmd := &cobra.Command{
Use: "daemon",
@@ -53,10 +56,10 @@ func Execute(ctx context.Context) {
daemonCmd.Flags().BoolVar(&daemArgs.Once, "once", false, "Run one job then exit")
rootCmd.AddCommand(daemonCmd)
// ./act_runner exec
// ./gitcaddy-runner exec
rootCmd.AddCommand(loadExecCmd(ctx))
// ./act_runner config
// ./gitcaddy-runner config
rootCmd.AddCommand(&cobra.Command{
Use: "generate-config",
Short: "Generate an example config file",
@@ -66,7 +69,7 @@ func Execute(ctx context.Context) {
},
})
// ./act_runner cache-server
// ./gitcaddy-runner cache-server
var cacheArgs cacheServerArgs
cacheCmd := &cobra.Command{
Use: "cache-server",
@@ -79,6 +82,31 @@ func Execute(ctx context.Context) {
cacheCmd.Flags().Uint16VarP(&cacheArgs.Port, "port", "p", 0, "Port of the cache server")
rootCmd.AddCommand(cacheCmd)
// ./gitcaddy-runner cleanup
cleanupCmd := &cobra.Command{
Use: "cleanup",
Short: "Manually trigger cleanup to free disk space",
Args: cobra.MaximumNArgs(0),
RunE: func(_ *cobra.Command, _ []string) error {
cfg, err := config.LoadDefault(configFile)
if err != nil {
return fmt.Errorf("failed to load config: %w", err)
}
result, err := cleanup.RunCleanup(ctx, cfg)
if err != nil {
return fmt.Errorf("cleanup failed: %w", err)
}
fmt.Printf("Cleanup completed: freed %d bytes, deleted %d files in %s\n", result.BytesFreed, result.FilesDeleted, result.Duration)
if len(result.Errors) > 0 {
fmt.Printf("Warnings: %d errors occurred\n", len(result.Errors))
for _, e := range result.Errors {
fmt.Printf(" - %s\n", e)
}
}
return nil
},
}
rootCmd.AddCommand(cleanupCmd)
// hide completion command
rootCmd.CompletionOptions.HiddenDefaultCmd = true


@@ -14,6 +14,7 @@ import (
"slices"
"strconv"
"strings"
"sync"
"time"
"connectrpc.com/connect"
@@ -21,13 +22,14 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gitea.com/gitea/act_runner/internal/app/poll"
"gitea.com/gitea/act_runner/internal/app/run"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/envcheck"
"gitea.com/gitea/act_runner/internal/pkg/labels"
"gitea.com/gitea/act_runner/internal/pkg/ver"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/app/poll"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/app/run"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/cleanup"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/client"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/config"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/envcheck"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/labels"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver"
)
const (
@@ -35,17 +37,36 @@ const (
DiskSpaceWarningThreshold = 85.0
// DiskSpaceCriticalThreshold is the percentage at which to log critical warnings
DiskSpaceCriticalThreshold = 95.0
// DiskSpaceAutoCleanupThreshold is the percentage at which to trigger automatic cleanup
DiskSpaceAutoCleanupThreshold = 85.0
// CleanupCooldown is the minimum time between automatic cleanups
CleanupCooldown = 10 * time.Minute
// CapabilitiesUpdateInterval is how often to update capabilities (including disk space)
CapabilitiesUpdateInterval = 5 * time.Minute
// BandwidthTestInterval is how often to run bandwidth tests (hourly)
BandwidthTestInterval = 1 * time.Hour
)
// Global bandwidth manager - accessible for triggering manual tests
var bandwidthManager *envcheck.BandwidthManager
// Global cleanup state
var (
lastCleanupTime time.Time
cleanupMutex sync.Mutex
globalConfig *config.Config
)
func runDaemon(ctx context.Context, daemArgs *daemonArgs, configFile *string) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
return func(_ *cobra.Command, _ []string) error {
cfg, err := config.LoadDefault(*configFile)
if err != nil {
return fmt.Errorf("invalid configuration: %w", err)
}
// Store config globally for auto-cleanup
globalConfig = cfg
initLogging(cfg)
log.Infoln("Starting runner daemon")
@@ -111,7 +132,7 @@ func runDaemon(ctx context.Context, daemArgs *daemonArgs, configFile *string) fu
return err
}
// if dockerSocketPath passes the check, override DOCKER_HOST with dockerSocketPath
os.Setenv("DOCKER_HOST", dockerSocketPath)
_ = os.Setenv("DOCKER_HOST", dockerSocketPath)
// empty cfg.Container.DockerHost means act_runner need to find an available docker host automatically
// and assign the path to cfg.Container.DockerHost
if cfg.Container.DockerHost == "" {
@@ -152,30 +173,53 @@ func runDaemon(ctx context.Context, daemArgs *daemonArgs, configFile *string) fu
dockerHost = dh
}
}
capabilities := envcheck.DetectCapabilities(ctx, dockerHost)
capabilitiesJson := capabilities.ToJSON()
log.Infof("detected capabilities: %s", capabilitiesJson)
// Initialize bandwidth manager with the Gitea server address
bandwidthManager = envcheck.NewBandwidthManager(reg.Address, BandwidthTestInterval)
bandwidthManager.Start(ctx)
log.Infof("bandwidth manager started, testing against: %s", reg.Address)
capabilities := envcheck.DetectCapabilities(ctx, dockerHost, cfg.Container.WorkdirParent, globalConfig.Runner.Capacity)
// Include initial bandwidth result if available
capabilities.Bandwidth = bandwidthManager.GetLastResult()
capabilitiesJSON := capabilities.ToJSON()
log.Infof("detected capabilities: %s", capabilitiesJSON)
// Check disk space and warn if low
checkDiskSpaceWarnings(capabilities)
checkDiskSpaceAndCleanup(ctx, capabilities)
// declare the labels of the runner before fetching tasks
resp, err := runner.Declare(ctx, ls.Names(), capabilitiesJson)
if err != nil && connect.CodeOf(err) == connect.CodeUnimplemented {
log.Errorf("Your Gitea version is too old to support runner declare, please upgrade to v1.21 or later")
resp, err := runner.Declare(ctx, ls.Names(), capabilitiesJSON)
switch {
case err != nil && connect.CodeOf(err) == connect.CodeUnimplemented:
log.Errorf("Your GitCaddy version is too old to support runner declare, please upgrade to v1.21 or later")
return err
} else if err != nil {
case err != nil:
log.WithError(err).Error("fail to invoke Declare")
return err
} else {
default:
log.Infof("runner: %s, with version: %s, with labels: %v, declare successfully",
resp.Msg.Runner.Name, resp.Msg.Runner.Version, resp.Msg.Runner.Labels)
}
// Start periodic capabilities update goroutine
go periodicCapabilitiesUpdate(ctx, runner, ls.Names(), dockerHost)
go periodicCapabilitiesUpdate(ctx, runner, ls.Names(), dockerHost, cfg.Container.WorkdirParent)
// Start periodic stale job cache cleanup (every hour, remove caches older than 2 hours)
go func() {
ticker := time.NewTicker(1 * time.Hour)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
runner.CleanStaleJobCaches(2 * time.Hour)
}
}
}()
poller := poll.New(cfg, cli, runner)
poller.SetBandwidthManager(bandwidthManager)
if daemArgs.Once || reg.Ephemeral {
done := make(chan struct{})
@@ -209,8 +253,8 @@ func runDaemon(ctx context.Context, daemArgs *daemonArgs, configFile *string) fu
}
}
// checkDiskSpaceWarnings logs warnings if disk space is low
func checkDiskSpaceWarnings(capabilities *envcheck.RunnerCapabilities) {
// checkDiskSpaceAndCleanup logs warnings if disk space is low and triggers cleanup if needed
func checkDiskSpaceAndCleanup(ctx context.Context, capabilities *envcheck.RunnerCapabilities) {
if capabilities.Disk == nil {
return
}
@@ -218,15 +262,56 @@ func checkDiskSpaceWarnings(capabilities *envcheck.RunnerCapabilities) {
usedPercent := capabilities.Disk.UsedPercent
freeGB := float64(capabilities.Disk.Free) / (1024 * 1024 * 1024)
if usedPercent >= DiskSpaceCriticalThreshold {
switch {
case usedPercent >= DiskSpaceCriticalThreshold:
log.Errorf("CRITICAL: Disk space critically low! %.1f%% used, only %.2f GB free. Runner may fail to execute jobs!", usedPercent, freeGB)
} else if usedPercent >= DiskSpaceWarningThreshold {
// Always try cleanup at critical level
triggerAutoCleanup(ctx)
case usedPercent >= DiskSpaceAutoCleanupThreshold:
log.Warnf("WARNING: Disk space at %.1f%% used (%.2f GB free). Triggering automatic cleanup.", usedPercent, freeGB)
triggerAutoCleanup(ctx)
case usedPercent >= DiskSpaceWarningThreshold:
log.Warnf("WARNING: Disk space running low. %.1f%% used, %.2f GB free. Consider cleaning up disk space.", usedPercent, freeGB)
}
}
// periodicCapabilitiesUpdate periodically updates capabilities including disk space
func periodicCapabilitiesUpdate(ctx context.Context, runner *run.Runner, labelNames []string, dockerHost string) {
// triggerAutoCleanup runs cleanup if cooldown has passed
func triggerAutoCleanup(ctx context.Context) {
cleanupMutex.Lock()
defer cleanupMutex.Unlock()
// Check cooldown (except for first run)
if !lastCleanupTime.IsZero() && time.Since(lastCleanupTime) < CleanupCooldown {
log.Debugf("Skipping auto-cleanup, cooldown not expired (last cleanup: %s ago)", time.Since(lastCleanupTime))
return
}
if globalConfig == nil {
log.Warn("Cannot run auto-cleanup: config not available")
return
}
log.Info("Starting automatic disk cleanup...")
lastCleanupTime = time.Now()
go func() {
result, err := cleanup.RunCleanup(ctx, globalConfig)
if err != nil {
log.WithError(err).Error("Auto-cleanup failed")
return
}
log.Infof("Auto-cleanup completed: freed %d bytes, deleted %d files in %s",
result.BytesFreed, result.FilesDeleted, result.Duration)
if len(result.Errors) > 0 {
for _, e := range result.Errors {
log.WithError(e).Warn("Cleanup error")
}
}
}()
}
// periodicCapabilitiesUpdate periodically updates capabilities including disk space and bandwidth
func periodicCapabilitiesUpdate(ctx context.Context, runner *run.Runner, labelNames []string, dockerHost string, workingDir string) {
ticker := time.NewTicker(CapabilitiesUpdateInterval)
defer ticker.Stop()
@@ -234,23 +319,37 @@ func periodicCapabilitiesUpdate(ctx context.Context, runner *run.Runner, labelNa
select {
case <-ctx.Done():
log.Debug("stopping periodic capabilities update")
if bandwidthManager != nil {
bandwidthManager.Stop()
}
return
case <-ticker.C:
// Detect updated capabilities (disk space changes over time)
capabilities := envcheck.DetectCapabilities(ctx, dockerHost)
capabilitiesJson := capabilities.ToJSON()
capabilities := envcheck.DetectCapabilities(ctx, dockerHost, workingDir, globalConfig.Runner.Capacity)
// Include latest bandwidth result
if bandwidthManager != nil {
capabilities.Bandwidth = bandwidthManager.GetLastResult()
}
capabilitiesJSON := capabilities.ToJSON()
// Check for disk space warnings
checkDiskSpaceWarnings(capabilities)
checkDiskSpaceAndCleanup(ctx, capabilities)
// Send updated capabilities to server
_, err := runner.Declare(ctx, labelNames, capabilitiesJson)
_, err := runner.Declare(ctx, labelNames, capabilitiesJSON)
if err != nil {
log.WithError(err).Debug("failed to update capabilities")
} else {
log.Debugf("capabilities updated: disk %.1f%% used, %.2f GB free",
bandwidthInfo := ""
if capabilities.Bandwidth != nil {
bandwidthInfo = fmt.Sprintf(", bandwidth: %.1f Mbps", capabilities.Bandwidth.DownloadMbps)
}
log.Debugf("capabilities updated: disk %.1f%% used, %.2f GB free%s",
capabilities.Disk.UsedPercent,
float64(capabilities.Disk.Free)/(1024*1024*1024))
float64(capabilities.Disk.Free)/(1024*1024*1024),
bandwidthInfo)
}
}
}


@@ -264,7 +264,7 @@ func printList(plan *model.Plan) error {
return nil
}
func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *executeArgs) error {
func runExecList(_ context.Context, planner model.WorkflowPlanner, execArgs *executeArgs) error {
// plan with filtered jobs - to be used for filtering only
var filterPlan *model.Plan
@@ -286,19 +286,20 @@ func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *e
}
var err error
if execArgs.job != "" {
switch {
case execArgs.job != "":
log.Infof("Preparing plan with a job: %s", execArgs.job)
filterPlan, err = planner.PlanJob(execArgs.job)
if err != nil {
return err
}
} else if filterEventName != "" {
case filterEventName != "":
log.Infof("Preparing plan for a event: %s", filterEventName)
filterPlan, err = planner.PlanEvent(filterEventName)
if err != nil {
return err
}
} else {
default:
log.Infof("Preparing plan with all jobs")
filterPlan, err = planner.PlanAll()
if err != nil {
@@ -312,7 +313,7 @@ func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *e
}
func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
return func(_ *cobra.Command, _ []string) error {
planner, err := model.NewWorkflowPlanner(execArgs.WorkflowsPath(), execArgs.noWorkflowRecurse)
if err != nil {
return err
@@ -331,18 +332,19 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
// collect all events from loaded workflows
events := planner.GetEvents()
if len(execArgs.event) > 0 {
switch {
case len(execArgs.event) > 0:
log.Infof("Using chosen event for filtering: %s", execArgs.event)
eventName = execArgs.event
} else if len(events) == 1 && len(events[0]) > 0 {
case len(events) == 1 && len(events[0]) > 0:
log.Infof("Using the only detected workflow event: %s", events[0])
eventName = events[0]
} else if execArgs.autodetectEvent && len(events) > 0 && len(events[0]) > 0 {
case execArgs.autodetectEvent && len(events) > 0 && len(events[0]) > 0:
// default the event type to the first of the available events
// so the user doesn't have to specify one.
log.Infof("Using first detected workflow event: %s", events[0])
eventName = events[0]
} else {
default:
log.Infof("Using default workflow event: push")
eventName = "push"
}
@@ -388,7 +390,7 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
if err != nil {
fmt.Println(err)
}
defer os.RemoveAll(tempDir)
defer func() { _ = os.RemoveAll(tempDir) }()
execArgs.artifactServerPath = tempDir
}
@@ -454,7 +456,7 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
log.Debugf("artifacts server started at %s:%s", execArgs.artifactServerPath, execArgs.artifactServerPort)
ctx = common.WithDryrun(ctx, execArgs.dryrun)
executor := r.NewPlanExecutor(plan).Finally(func(ctx context.Context) error {
executor := r.NewPlanExecutor(plan).Finally(func(_ context.Context) error {
artifactCancel()
return nil
})
@@ -505,7 +507,7 @@ func loadExecCmd(ctx context.Context) *cobra.Command {
execCmd.PersistentFlags().BoolVarP(&execArg.dryrun, "dryrun", "n", false, "dryrun mode")
execCmd.PersistentFlags().StringVarP(&execArg.image, "image", "i", "docker.gitea.com/runner-images:ubuntu-latest", "Docker image to use. Use \"-self-hosted\" to run directly on the host.")
execCmd.PersistentFlags().StringVarP(&execArg.network, "network", "", "", "Specify the network to which the container will connect")
execCmd.PersistentFlags().StringVarP(&execArg.githubInstance, "gitea-instance", "", "", "Gitea instance to use.")
execCmd.PersistentFlags().StringVarP(&execArg.githubInstance, "gitea-instance", "", "", "GitCaddy instance to use.")
return execCmd
}

View File

@@ -20,15 +20,15 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/labels"
"gitea.com/gitea/act_runner/internal/pkg/ver"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/client"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/config"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/labels"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver"
)
// runRegister registers a runner to the server
func runRegister(ctx context.Context, regArgs *registerArgs, configFile *string) func(*cobra.Command, []string) error {
return func(cmd *cobra.Command, args []string) error {
return func(_ *cobra.Command, _ []string) error {
log.SetReportCaller(false)
isTerm := isatty.IsTerminal(os.Stdout.Fd())
log.SetFormatter(&log.TextFormatter{
@@ -80,6 +80,7 @@ type registerArgs struct {
type registerStage int8
// Register stage constants define the steps in the registration workflow.
const (
StageUnknown registerStage = -1
StageOverwriteLocalConfig registerStage = iota + 1
@@ -250,7 +251,7 @@ func registerInteractive(ctx context.Context, configFile string, regArgs *regist
if stage == StageWaitingForRegistration {
log.Infof("Registering runner, name=%s, instance=%s, labels=%v.", inputs.RunnerName, inputs.InstanceAddr, inputs.Labels)
if err := doRegister(ctx, cfg, inputs); err != nil {
return fmt.Errorf("Failed to register runner: %w", err)
return fmt.Errorf("failed to register runner: %w", err)
}
log.Infof("Runner registered successfully.")
return nil
@@ -272,7 +273,7 @@ func printStageHelp(stage registerStage) {
case StageOverwriteLocalConfig:
log.Infoln("Runner is already registered, overwrite local config? [y/N]")
case StageInputInstance:
log.Infoln("Enter the Gitea instance URL (for example, https://gitea.com/):")
log.Infoln("Enter the GitCaddy instance URL (for example, https://gitea.com/):")
case StageInputToken:
log.Infoln("Enter the runner token:")
case StageInputRunnerName:
@@ -311,7 +312,7 @@ func registerNoInteractive(ctx context.Context, configFile string, regArgs *regi
return err
}
if err := doRegister(ctx, cfg, inputs); err != nil {
return fmt.Errorf("Failed to register runner: %w", err)
return fmt.Errorf("failed to register runner: %w", err)
}
log.Infof("Runner registered successfully.")
return nil
@@ -341,7 +342,7 @@ func doRegister(ctx context.Context, cfg *config.Config, inputs *registerInputs)
}
if err != nil {
log.WithError(err).
Errorln("Cannot ping the Gitea instance server")
Errorln("Cannot ping the GitCaddy instance server")
// TODO: if ping failed, retry or exit
time.Sleep(time.Second)
} else {

View File

@@ -1,6 +1,7 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// Copyright 2023 The Gitea Authors and MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
// Package poll provides task polling functionality for CI runners.
package poll
import (
@@ -15,17 +16,20 @@ import (
log "github.com/sirupsen/logrus"
"golang.org/x/time/rate"
"gitea.com/gitea/act_runner/internal/app/run"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/envcheck"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/app/run"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/cleanup"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/client"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/config"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/envcheck"
)
// Poller handles task polling from the Gitea server.
type Poller struct {
client client.Client
runner *run.Runner
cfg *config.Config
tasksVersion atomic.Int64 // tasksVersion stores the version of the last task fetched from the Gitea server.
bandwidthManager *envcheck.BandwidthManager
pollingCtx context.Context
shutdownPolling context.CancelFunc
@@ -36,6 +40,7 @@ type Poller struct {
done chan struct{}
}
// New creates a new Poller instance.
func New(cfg *config.Config, client client.Client, runner *run.Runner) *Poller {
pollingCtx, shutdownPolling := context.WithCancel(context.Background())
@@ -58,6 +63,12 @@ func New(cfg *config.Config, client client.Client, runner *run.Runner) *Poller {
}
}
// SetBandwidthManager sets the bandwidth manager for on-demand testing
func (p *Poller) SetBandwidthManager(bm *envcheck.BandwidthManager) {
p.bandwidthManager = bm
}
// Poll starts polling for tasks with the configured capacity.
func (p *Poller) Poll() {
limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
wg := &sync.WaitGroup{}
@@ -71,6 +82,7 @@ func (p *Poller) Poll() {
close(p.done)
}
// PollOnce polls for a single task and then exits.
func (p *Poller) PollOnce() {
limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
@@ -80,18 +92,19 @@ func (p *Poller) PollOnce() {
close(p.done)
}
// Shutdown gracefully stops the poller.
func (p *Poller) Shutdown(ctx context.Context) error {
p.shutdownPolling()
select {
// graceful shutdown completed succesfully
// graceful shutdown completed successfully
case <-p.done:
return nil
// our timeout for shutting down ran out
case <-ctx.Done():
// when both the timeout fires and the graceful shutdown
// completed succsfully, this branch of the select may
// completed successfully, this branch of the select may
// fire. Do a non-blocking check here against the graceful
// shutdown status to avoid sending an error if we don't need to.
_, ok := <-p.done
@@ -103,7 +116,7 @@ func (p *Poller) Shutdown(ctx context.Context) error {
p.shutdownJobs()
// wait for running jobs to report their status to Gitea
_, _ = <-p.done
<-p.done
return ctx.Err()
}
@@ -159,14 +172,20 @@ func (p *Poller) fetchTask(ctx context.Context) (*runnerv1.Task, bool) {
defer cancel()
// Detect capabilities including current disk space
caps := envcheck.DetectCapabilities(ctx, p.cfg.Container.DockerHost)
capsJson := caps.ToJSON()
caps := envcheck.DetectCapabilities(ctx, p.cfg.Container.DockerHost, p.cfg.Container.WorkdirParent, p.cfg.Runner.Capacity)
// Include latest bandwidth result if available
if p.bandwidthManager != nil {
caps.Bandwidth = p.bandwidthManager.GetLastResult()
}
capsJSON := caps.ToJSON()
// Load the version value that was in the cache when the request was sent.
v := p.tasksVersion.Load()
fetchReq := &runnerv1.FetchTaskRequest{
TasksVersion: v,
CapabilitiesJson: capsJson,
CapabilitiesJson: capsJSON,
}
resp, err := p.client.FetchTask(reqCtx, connect.NewRequest(fetchReq))
if errors.Is(err, context.DeadlineExceeded) {
@@ -181,6 +200,32 @@ func (p *Poller) fetchTask(ctx context.Context) (*runnerv1.Task, bool) {
return nil, false
}
// Check if server requested a bandwidth test
if resp.Msg.RequestBandwidthTest && p.bandwidthManager != nil {
log.Info("Server requested bandwidth test, running now...")
go func() {
result := p.bandwidthManager.RunTest(ctx)
if result != nil {
log.Infof("Bandwidth test completed: %.1f Mbps download, %.0f ms latency",
result.DownloadMbps, result.Latency)
}
}()
}
// Check if server requested a cleanup
if resp.Msg.RequestCleanup {
log.Info("Server requested cleanup, running now...")
go func() {
result, err := cleanup.RunCleanup(ctx, p.cfg)
if err != nil {
log.Errorf("Cleanup failed: %v", err)
} else if result != nil {
log.Infof("Cleanup completed: freed %d bytes, deleted %d files in %s",
result.BytesFreed, result.FilesDeleted, result.Duration)
}
}()
}
if resp.Msg.TasksVersion > v {
p.tasksVersion.CompareAndSwap(v, resp.Msg.TasksVersion)
}

View File

@@ -1,6 +1,7 @@
// Copyright 2024 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// Package run provides the core runner functionality for executing tasks.
package run
import (

View File

@@ -7,6 +7,7 @@ import (
"context"
"encoding/json"
"fmt"
"os"
"path/filepath"
"strings"
"sync"
@@ -21,11 +22,11 @@ import (
"github.com/nektos/act/pkg/runner"
log "github.com/sirupsen/logrus"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/labels"
"gitea.com/gitea/act_runner/internal/pkg/report"
"gitea.com/gitea/act_runner/internal/pkg/ver"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/client"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/config"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/labels"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/report"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver"
)
// Runner runs the pipeline.
@@ -41,6 +42,50 @@ type Runner struct {
runningTasks sync.Map
}
// getJobCacheDir returns a job-isolated cache directory
func (r *Runner) getJobCacheDir(taskID int64) string {
return filepath.Join(r.cfg.Host.WorkdirParent, "jobs", fmt.Sprintf("%d", taskID))
}
// cleanupJobCache removes the job-specific cache directory after completion
func (r *Runner) cleanupJobCache(taskID int64) {
jobCacheDir := r.getJobCacheDir(taskID)
if err := os.RemoveAll(jobCacheDir); err != nil {
log.Warnf("failed to cleanup job cache %s: %v", jobCacheDir, err)
} else {
log.Infof("cleaned up job cache: %s", jobCacheDir)
}
}
// CleanStaleJobCaches removes job cache directories older than maxAge
func (r *Runner) CleanStaleJobCaches(maxAge time.Duration) {
jobsDir := filepath.Join(r.cfg.Host.WorkdirParent, "jobs")
entries, err := os.ReadDir(jobsDir)
if err != nil {
return // directory may not exist yet
}
cutoff := time.Now().Add(-maxAge)
for _, entry := range entries {
if !entry.IsDir() {
continue
}
info, err := entry.Info()
if err != nil {
continue
}
if info.ModTime().Before(cutoff) {
jobPath := filepath.Join(jobsDir, entry.Name())
if err := os.RemoveAll(jobPath); err != nil {
log.Warnf("failed to remove stale job cache %s: %v", jobPath, err)
} else {
log.Infof("evicted stale job cache: %s", jobPath)
}
}
}
}
// NewRunner creates a new Runner with the given configuration, registration, and client.
func NewRunner(cfg *config.Config, reg *config.Registration, cli client.Client) *Runner {
ls := labels.Labels{}
for _, v := range reg.Labels {
@@ -89,12 +134,14 @@ func NewRunner(cfg *config.Config, reg *config.Registration, cli client.Client)
}
}
// Run executes a task from the server.
func (r *Runner) Run(ctx context.Context, task *runnerv1.Task) error {
if _, ok := r.runningTasks.Load(task.Id); ok {
return fmt.Errorf("task %d is already running", task.Id)
}
r.runningTasks.Store(task.Id, struct{}{})
defer r.runningTasks.Delete(task.Id)
defer r.cleanupJobCache(task.Id)
ctx, cancel := context.WithTimeout(ctx, r.cfg.Runner.Timeout)
defer cancel()
@@ -116,7 +163,7 @@ func (r *Runner) Run(ctx context.Context, task *runnerv1.Task) error {
// getDefaultActionsURL
// when DEFAULT_ACTIONS_URL == "https://github.com" and GithubMirror is not blank,
// it should be set to GithubMirror first.
func (r *Runner) getDefaultActionsURL(ctx context.Context, task *runnerv1.Task) string {
func (r *Runner) getDefaultActionsURL(_ context.Context, task *runnerv1.Task) string {
giteaDefaultActionsURL := task.Context.Fields["gitea_default_actions_url"].GetStringValue()
if giteaDefaultActionsURL == "https://github.com" && r.cfg.Runner.GithubMirror != "" {
return r.cfg.Runner.GithubMirror
@@ -174,8 +221,8 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
preset.Token = t
}
if actionsIdTokenRequestUrl := taskContext["actions_id_token_request_url"].GetStringValue(); actionsIdTokenRequestUrl != "" {
r.envs["ACTIONS_ID_TOKEN_REQUEST_URL"] = actionsIdTokenRequestUrl
if actionsIDTokenRequestURL := taskContext["actions_id_token_request_url"].GetStringValue(); actionsIDTokenRequestURL != "" {
r.envs["ACTIONS_ID_TOKEN_REQUEST_URL"] = actionsIDTokenRequestURL
r.envs["ACTIONS_ID_TOKEN_REQUEST_TOKEN"] = taskContext["actions_id_token_request_token"].GetStringValue()
task.Secrets["ACTIONS_ID_TOKEN_REQUEST_TOKEN"] = r.envs["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
}
@@ -197,19 +244,30 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
maxLifetime = time.Until(deadline)
}
// Create job-specific environment with isolated cache directories
jobCacheDir := r.getJobCacheDir(task.Id)
jobEnvs := make(map[string]string, len(r.envs)+2)
for k, v := range r.envs {
jobEnvs[k] = v
}
// Isolate golangci-lint cache to prevent parallel job conflicts
jobEnvs["GOLANGCI_LINT_CACHE"] = filepath.Join(jobCacheDir, "golangci-lint")
// Set XDG_CACHE_HOME to isolate other tools that respect it
jobEnvs["XDG_CACHE_HOME"] = jobCacheDir
runnerConfig := &runner.Config{
// On Linux, Workdir will be like "/<parent_directory>/<owner>/<repo>"
// On Windows, Workdir will be like "\<parent_directory>\<owner>\<repo>"
Workdir: filepath.FromSlash(fmt.Sprintf("/%s/%s", strings.TrimLeft(r.cfg.Container.WorkdirParent, "/"), preset.Repository)),
BindWorkdir: false,
ActionCacheDir: filepath.FromSlash(r.cfg.Host.WorkdirParent),
ActionCacheDir: filepath.FromSlash(jobCacheDir),
ReuseContainers: false,
ForcePull: r.cfg.Container.ForcePull,
ForceRebuild: r.cfg.Container.ForceRebuild,
LogOutput: true,
JSONLogger: false,
Env: r.envs,
Env: jobEnvs,
Secrets: task.Secrets,
GitHubInstance: strings.TrimSuffix(r.client.Address(), "/"),
AutoRemove: true,
@@ -249,10 +307,11 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
return execErr
}
func (r *Runner) Declare(ctx context.Context, labels []string, capabilitiesJson string) (*connect.Response[runnerv1.DeclareResponse], error) {
// Declare sends the runner's labels and capabilities to the server.
func (r *Runner) Declare(ctx context.Context, labels []string, capabilitiesJSON string) (*connect.Response[runnerv1.DeclareResponse], error) {
return r.client.Declare(ctx, connect.NewRequest(&runnerv1.DeclareRequest{
Version: ver.Version(),
Labels: labels,
CapabilitiesJson: capabilitiesJson,
CapabilitiesJson: capabilitiesJSON,
}))
}

View File

@@ -0,0 +1,146 @@
// Copyright 2026 MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
// Package artifact provides utilities for handling artifact uploads.
package artifact
import (
"bytes"
"fmt"
"io"
"mime/multipart"
"net/http"
"os"
"time"
log "github.com/sirupsen/logrus"
)
// UploadHelper handles reliable file uploads with retry logic
type UploadHelper struct {
MaxRetries int
RetryDelay time.Duration
ChunkSize int64
ConnectTimeout time.Duration
MaxTimeout time.Duration
}
// NewUploadHelper creates a new upload helper with sensible defaults
func NewUploadHelper() *UploadHelper {
return &UploadHelper{
MaxRetries: 5,
RetryDelay: 10 * time.Second,
ChunkSize: 10 * 1024 * 1024, // 10MB
ConnectTimeout: 120 * time.Second,
MaxTimeout: 3600 * time.Second,
}
}
// UploadWithRetry uploads a file with automatic retry on failure
func (u *UploadHelper) UploadWithRetry(url, token, filepath string) error {
client := &http.Client{
Timeout: u.MaxTimeout,
Transport: &http.Transport{
MaxIdleConns: 10,
MaxIdleConnsPerHost: 5,
IdleConnTimeout: 90 * time.Second,
DisableKeepAlives: false, // Keep connections alive
ForceAttemptHTTP2: false, // Use HTTP/1.1 for large uploads
},
}
var lastErr error
for attempt := 0; attempt < u.MaxRetries; attempt++ {
if attempt > 0 {
delay := u.RetryDelay * time.Duration(attempt)
log.Infof("Upload attempt %d/%d, waiting %v before retry...", attempt+1, u.MaxRetries, delay)
time.Sleep(delay)
}
// Pre-resolve DNS / warm connection
if err := u.prewarmConnection(url); err != nil {
lastErr = fmt.Errorf("connection prewarm failed: %w", err)
log.Warnf("Prewarm failed: %v", err)
continue
}
// Attempt upload
if err := u.doUpload(client, url, token, filepath); err != nil {
lastErr = err
log.Warnf("Upload attempt %d failed: %v", attempt+1, err)
continue
}
log.Infof("Upload succeeded on attempt %d", attempt+1)
return nil // Success
}
return fmt.Errorf("upload failed after %d attempts: %w", u.MaxRetries, lastErr)
}
// prewarmConnection establishes a connection to help with DNS and TCP setup
func (u *UploadHelper) prewarmConnection(url string) error {
req, err := http.NewRequest("HEAD", url, nil)
if err != nil {
return err
}
client := &http.Client{Timeout: 10 * time.Second}
resp, err := client.Do(req)
if err != nil {
return err
}
_ = resp.Body.Close()
return nil
}
// doUpload performs the actual file upload
func (u *UploadHelper) doUpload(client *http.Client, url, token, filepath string) error {
file, err := os.Open(filepath)
if err != nil {
return fmt.Errorf("failed to open file: %w", err)
}
defer func() { _ = file.Close() }()
stat, err := file.Stat()
if err != nil {
return fmt.Errorf("failed to stat file: %w", err)
}
log.Infof("Uploading %s (%d bytes) to %s", filepath, stat.Size(), url)
// Create multipart form
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
part, err := writer.CreateFormFile("attachment", stat.Name())
if err != nil {
return fmt.Errorf("failed to create form file: %w", err)
}
if _, err := io.Copy(part, file); err != nil {
return fmt.Errorf("failed to copy file to form: %w", err)
}
_ = writer.Close()
req, err := http.NewRequest("POST", url, body)
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Authorization", fmt.Sprintf("token %s", token))
req.Header.Set("Content-Type", writer.FormDataContentType())
req.Header.Set("Connection", "keep-alive")
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("upload request failed: %w", err)
}
defer func() { _ = resp.Body.Close() }()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
respBody, _ := io.ReadAll(resp.Body)
return fmt.Errorf("upload failed with status %d: %s", resp.StatusCode, string(respBody))
}
log.Infof("Upload completed successfully, status: %d", resp.StatusCode)
return nil
}

View File

@@ -0,0 +1,390 @@
// Copyright 2026 MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
// Package cleanup provides disk cleanup utilities for CI runners.
package cleanup
import (
"context"
"fmt"
"os"
"path/filepath"
"time"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/config"
log "github.com/sirupsen/logrus"
)
// Result contains the results of a cleanup operation.
type Result struct {
BytesFreed int64
FilesDeleted int
Errors []error
Duration time.Duration
}
// RunCleanup performs cleanup operations to free disk space.
func RunCleanup(_ context.Context, cfg *config.Config) (*Result, error) {
start := time.Now()
result := &Result{}
log.Info("Starting runner cleanup...")
// 1. Clean old cache directories
cacheDir := filepath.Join(cfg.Cache.Dir, "_cache")
if cacheDir != "" {
if bytes, files, err := cleanOldDir(cacheDir, 24*time.Hour); err != nil {
result.Errors = append(result.Errors, fmt.Errorf("cache cleanup: %w", err))
} else {
result.BytesFreed += bytes
result.FilesDeleted += files
log.Infof("Cleaned cache: freed %d bytes, deleted %d files", bytes, files)
}
}
// 2. Clean old work directories
workDir := cfg.Container.WorkdirParent
if workDir != "" {
if bytes, files, err := cleanOldWorkDirs(workDir, 48*time.Hour); err != nil {
result.Errors = append(result.Errors, fmt.Errorf("workdir cleanup: %w", err))
} else {
result.BytesFreed += bytes
result.FilesDeleted += files
log.Infof("Cleaned work dirs: freed %d bytes, deleted %d files", bytes, files)
}
}
// 3. Clean old artifact staging directories
artifactDir := cfg.Cache.Dir
if bytes, files, err := cleanOldArtifacts(artifactDir, 72*time.Hour); err != nil {
result.Errors = append(result.Errors, fmt.Errorf("artifact cleanup: %w", err))
} else {
result.BytesFreed += bytes
result.FilesDeleted += files
log.Infof("Cleaned artifacts: freed %d bytes, deleted %d files", bytes, files)
}
// 4. Clean system temp files (older than 24h)
if bytes, files, err := cleanTempDir(24 * time.Hour); err != nil {
result.Errors = append(result.Errors, fmt.Errorf("temp cleanup: %w", err))
} else {
result.BytesFreed += bytes
result.FilesDeleted += files
log.Infof("Cleaned temp: freed %d bytes, deleted %d files", bytes, files)
}
// 5. Clean build tool caches (older than 7 days)
// These can grow very large from Go, npm, nuget, gradle, maven builds
if bytes, files, err := cleanBuildCaches(7 * 24 * time.Hour); err != nil {
result.Errors = append(result.Errors, fmt.Errorf("build cache cleanup: %w", err))
} else {
result.BytesFreed += bytes
result.FilesDeleted += files
log.Infof("Cleaned build caches: freed %d bytes, deleted %d files", bytes, files)
}
result.Duration = time.Since(start)
log.Infof("Cleanup completed: freed %s in %s", formatBytes(result.BytesFreed), result.Duration)
return result, nil
}
// cleanOldDir removes files older than maxAge from a directory
func cleanOldDir(dir string, maxAge time.Duration) (int64, int, error) {
if _, err := os.Stat(dir); os.IsNotExist(err) {
return 0, 0, nil
}
var bytesFreed int64
var filesDeleted int
cutoff := time.Now().Add(-maxAge)
err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return nil // Skip errors
}
if info.IsDir() {
return nil
}
if info.ModTime().Before(cutoff) {
size := info.Size()
if err := os.Remove(path); err == nil {
bytesFreed += size
filesDeleted++
}
}
return nil
})
return bytesFreed, filesDeleted, err
}
// cleanOldWorkDirs removes work directories older than maxAge
func cleanOldWorkDirs(baseDir string, maxAge time.Duration) (int64, int, error) {
if _, err := os.Stat(baseDir); os.IsNotExist(err) {
return 0, 0, nil
}
var bytesFreed int64
var filesDeleted int
cutoff := time.Now().Add(-maxAge)
entries, err := os.ReadDir(baseDir)
if err != nil {
return 0, 0, err
}
for _, entry := range entries {
if !entry.IsDir() {
continue
}
path := filepath.Join(baseDir, entry.Name())
info, err := entry.Info()
if err != nil {
continue
}
if info.ModTime().Before(cutoff) {
size := dirSize(path)
if err := os.RemoveAll(path); err == nil {
bytesFreed += size
filesDeleted++
log.Debugf("Removed old work dir: %s", path)
}
}
}
return bytesFreed, filesDeleted, nil
}
// cleanOldArtifacts removes artifact staging files older than maxAge
func cleanOldArtifacts(baseDir string, maxAge time.Duration) (int64, int, error) {
if _, err := os.Stat(baseDir); os.IsNotExist(err) {
return 0, 0, nil
}
var bytesFreed int64
var filesDeleted int
cutoff := time.Now().Add(-maxAge)
// Look for artifact staging dirs
patterns := []string{"artifact-*", "upload-*", "download-*"}
for _, pattern := range patterns {
matches, _ := filepath.Glob(filepath.Join(baseDir, pattern))
for _, path := range matches {
info, err := os.Stat(path)
if err != nil {
continue
}
if info.ModTime().Before(cutoff) {
var size int64
if info.IsDir() {
size = dirSize(path)
err = os.RemoveAll(path)
} else {
size = info.Size()
err = os.Remove(path)
}
if err == nil {
bytesFreed += size
filesDeleted++
}
}
}
}
return bytesFreed, filesDeleted, nil
}
// cleanTempDir removes old files from system temp directory
func cleanTempDir(maxAge time.Duration) (int64, int, error) {
tmpDir := os.TempDir()
var bytesFreed int64
var filesDeleted int
cutoff := time.Now().Add(-maxAge)
entries, err := os.ReadDir(tmpDir)
if err != nil {
return 0, 0, err
}
// Only clean files/dirs that look like runner/act artifacts or build tool temp files
runnerPatterns := []string{
"act-", "runner-", "gitea-", "workflow-",
"go-build", "go-link",
"node-compile-cache", "npm-", "yarn-", "yarn--", "pnpm-",
"ts-node-", "tsx-", "jiti", "v8-compile-cache",
"text-diff-expansion-test", "DiagOutputDir",
"dugite-native-", "reorderCommitMessage-", "squashCommitMessage-",
}
for _, entry := range entries {
name := entry.Name()
isRunner := false
for _, p := range runnerPatterns {
if len(name) >= len(p) && name[:len(p)] == p {
isRunner = true
break
}
}
if !isRunner {
continue
}
path := filepath.Join(tmpDir, name)
info, err := entry.Info()
if err != nil {
continue
}
if info.ModTime().Before(cutoff) {
var size int64
if info.IsDir() {
size = dirSize(path)
err = os.RemoveAll(path)
} else {
size = info.Size()
err = os.Remove(path)
}
if err == nil {
bytesFreed += size
filesDeleted++
}
}
}
return bytesFreed, filesDeleted, nil
}
// dirSize calculates the total size of a directory.
func dirSize(path string) int64 {
var size int64
_ = filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
if err != nil {
return nil
}
if !info.IsDir() {
size += info.Size()
}
return nil
})
return size
}
// cleanBuildCaches removes old build tool caches that accumulate from CI jobs
// These are cleaned more aggressively (files older than 7 days) since they can grow very large
func cleanBuildCaches(maxAge time.Duration) (int64, int, error) {
home := os.Getenv("HOME")
if home == "" {
home = os.Getenv("USERPROFILE") // Windows
}
if home == "" {
home = "/root" // fallback for runners typically running as root
}
var totalBytesFreed int64
var totalFilesDeleted int
// Build cache directories to clean
// Format: {path, description, maxAge (0 = use default)}
// Go build cache cleaned more aggressively (3 days) as it grows very fast
goBuildMaxAge := 3 * 24 * time.Hour
cacheDirs := []struct {
path string
desc string
maxAge time.Duration
}{
// Linux paths
{filepath.Join(home, ".cache", "go-build"), "Go build cache", goBuildMaxAge},
{filepath.Join(home, ".cache", "golangci-lint"), "golangci-lint cache", 0},
{filepath.Join(home, ".npm", "_cacache"), "npm cache", 0},
{filepath.Join(home, ".cache", "pnpm"), "pnpm cache", 0},
{filepath.Join(home, ".cache", "yarn"), "yarn cache", 0},
{filepath.Join(home, ".nuget", "packages"), "NuGet cache", 0},
{filepath.Join(home, ".gradle", "caches"), "Gradle cache", 0},
{filepath.Join(home, ".m2", "repository"), "Maven cache", 0},
{filepath.Join(home, ".cache", "pip"), "pip cache", 0},
{filepath.Join(home, ".cargo", "registry", "cache"), "Cargo cache", 0},
{filepath.Join(home, ".rustup", "tmp"), "Rustup temp", 0},
// macOS paths (Library/Caches)
{filepath.Join(home, "Library", "Caches", "go-build"), "Go build cache (macOS)", goBuildMaxAge},
{filepath.Join(home, "Library", "Caches", "Yarn"), "Yarn cache (macOS)", 0},
{filepath.Join(home, "Library", "Caches", "pip"), "pip cache (macOS)", 0},
{filepath.Join(home, "Library", "Caches", "Homebrew"), "Homebrew cache (macOS)", 0},
// Windows paths (LOCALAPPDATA)
{filepath.Join(os.Getenv("LOCALAPPDATA"), "go-build"), "Go build cache (Windows)", goBuildMaxAge},
{filepath.Join(os.Getenv("LOCALAPPDATA"), "npm-cache"), "npm cache (Windows)", 0},
{filepath.Join(os.Getenv("LOCALAPPDATA"), "pnpm"), "pnpm cache (Windows)", 0},
{filepath.Join(os.Getenv("LOCALAPPDATA"), "Yarn", "Cache"), "Yarn cache (Windows)", 0},
{filepath.Join(os.Getenv("LOCALAPPDATA"), "NuGet", "v3-cache"), "NuGet cache (Windows)", 0},
{filepath.Join(os.Getenv("LOCALAPPDATA"), "pip", "Cache"), "pip cache (Windows)", 0},
// Windows custom paths used by some CI setups
{"C:\\L\\Yarn", "Yarn global cache (Windows)", 0},
{filepath.Join(os.TempDir(), "chocolatey"), "Chocolatey temp cache", 0},
}
for _, cache := range cacheDirs {
if _, err := os.Stat(cache.path); os.IsNotExist(err) {
continue
}
// Use cache-specific maxAge if set, otherwise use default
cacheMaxAge := cache.maxAge
if cacheMaxAge == 0 {
cacheMaxAge = maxAge
}
cutoff := time.Now().Add(-cacheMaxAge)
var bytesFreed int64
var filesDeleted int
err := filepath.Walk(cache.path, func(path string, info os.FileInfo, err error) error {
if err != nil {
return nil // Skip errors
}
if info.IsDir() {
return nil
}
if info.ModTime().Before(cutoff) {
size := info.Size()
if err := os.Remove(path); err == nil {
bytesFreed += size
filesDeleted++
}
}
return nil
})
if err == nil && (bytesFreed > 0 || filesDeleted > 0) {
log.Infof("Cleaned %s: freed %s, deleted %d files", cache.desc, formatBytes(bytesFreed), filesDeleted)
totalBytesFreed += bytesFreed
totalFilesDeleted += filesDeleted
}
// Also remove empty directories
_ = filepath.Walk(cache.path, func(path string, info os.FileInfo, err error) error {
if err != nil || !info.IsDir() || path == cache.path {
return nil
}
entries, _ := os.ReadDir(path)
if len(entries) == 0 {
_ = os.Remove(path)
}
return nil
})
}
return totalBytesFreed, totalFilesDeleted, nil
}
// formatBytes formats bytes into human readable string
func formatBytes(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%d B", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

View File

@@ -1,6 +1,7 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// Package client provides the HTTP client for communicating with the runner API.
package client
import (

View File

@@ -3,6 +3,7 @@
package client
// HTTP header constants for runner authentication and identification.
const (
UUIDHeader = "x-runner-uuid"
TokenHeader = "x-runner-token"

View File

@@ -63,10 +63,12 @@ func New(endpoint string, insecure bool, uuid, token, version string, opts ...co
}
}
// Address returns the endpoint URL of the client.
func (c *HTTPClient) Address() string {
return c.endpoint
}
// Insecure returns whether TLS verification is disabled.
func (c *HTTPClient) Insecure() bool {
return c.insecure
}

View File

@@ -1,6 +1,7 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// Package config provides configuration loading and management for the runner.
package config
import (

View File

@@ -5,5 +5,7 @@ package config
import _ "embed"
// Example contains the example configuration file content.
//
//go:embed config.example.yaml
var Example []byte

View File

@@ -23,12 +23,13 @@ type Registration struct {
Ephemeral bool `json:"ephemeral"`
}
// LoadRegistration loads the runner registration from a JSON file.
func LoadRegistration(file string) (*Registration, error) {
f, err := os.Open(file)
if err != nil {
return nil, err
}
defer f.Close()
defer func() { _ = f.Close() }()
var reg Registration
if err := json.NewDecoder(f).Decode(&reg); err != nil {
@@ -40,12 +41,13 @@ func LoadRegistration(file string) (*Registration, error) {
return &reg, nil
}
// SaveRegistration saves the runner registration to a JSON file.
func SaveRegistration(file string, reg *Registration) error {
f, err := os.Create(file)
if err != nil {
return err
}
defer f.Close()
defer func() { _ = f.Close() }()
reg.Warning = registrationWarning

View File

@@ -0,0 +1,209 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package envcheck
import (
"context"
"fmt"
"io"
"net/http"
"sync"
"time"
)
// BandwidthInfo holds network bandwidth test results
type BandwidthInfo struct {
DownloadMbps float64 `json:"download_mbps"`
UploadMbps float64 `json:"upload_mbps,omitempty"`
Latency float64 `json:"latency_ms,omitempty"`
TestedAt time.Time `json:"tested_at"`
}
// BandwidthManager handles periodic bandwidth testing
type BandwidthManager struct {
serverURL string
lastResult *BandwidthInfo
mu sync.RWMutex
testInterval time.Duration
stopChan chan struct{}
}
// NewBandwidthManager creates a new bandwidth manager
func NewBandwidthManager(serverURL string, testInterval time.Duration) *BandwidthManager {
return &BandwidthManager{
serverURL: serverURL,
testInterval: testInterval,
stopChan: make(chan struct{}),
}
}
// Start begins periodic bandwidth testing
func (bm *BandwidthManager) Start(ctx context.Context) {
// Run initial test
bm.RunTest(ctx)
// Start periodic testing
go func() {
ticker := time.NewTicker(bm.testInterval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
bm.RunTest(ctx)
case <-bm.stopChan:
return
case <-ctx.Done():
return
}
}
}()
}
// Stop stops the periodic testing
func (bm *BandwidthManager) Stop() {
close(bm.stopChan)
}
// RunTest runs a bandwidth test and stores the result
func (bm *BandwidthManager) RunTest(ctx context.Context) *BandwidthInfo {
result := TestBandwidth(ctx, bm.serverURL)
bm.mu.Lock()
bm.lastResult = result
bm.mu.Unlock()
return result
}
// GetLastResult returns the most recent bandwidth test result
func (bm *BandwidthManager) GetLastResult() *BandwidthInfo {
bm.mu.RLock()
defer bm.mu.RUnlock()
return bm.lastResult
}
// TestBandwidth tests network bandwidth to the GitCaddy server
func TestBandwidth(ctx context.Context, serverURL string) *BandwidthInfo {
if serverURL == "" {
return nil
}
info := &BandwidthInfo{
TestedAt: time.Now(),
}
// Test latency first
info.Latency = testLatency(ctx, serverURL)
// Test download speed
info.DownloadMbps = testDownloadSpeed(ctx, serverURL)
return info
}
func testLatency(ctx context.Context, serverURL string) float64 {
client := &http.Client{
Timeout: 10 * time.Second,
}
reqCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(reqCtx, "HEAD", serverURL, nil)
if err != nil {
return 0
}
start := time.Now()
resp, err := client.Do(req)
if err != nil {
return 0
}
_ = resp.Body.Close()
latency := time.Since(start).Seconds() * 1000 // Convert to ms
return float64(int(latency*100)) / 100 // Truncate to 2 decimals
}
func testDownloadSpeed(ctx context.Context, serverURL string) float64 {
// Try multiple endpoints to accumulate ~1MB of data
endpoints := []string{
"/assets/css/index.css",
"/assets/js/index.js",
"/assets/img/logo.svg",
"/assets/img/logo.png",
"/",
}
client := &http.Client{
Timeout: 30 * time.Second,
}
var totalBytes int64
var totalDuration time.Duration
targetBytes := int64(1024 * 1024) // 1MB target
maxAttempts := 10 // Limit iterations
for attempt := 0; attempt < maxAttempts && totalBytes < targetBytes; attempt++ {
for _, endpoint := range endpoints {
if totalBytes >= targetBytes {
break
}
url := serverURL + endpoint
reqCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
req, err := http.NewRequestWithContext(reqCtx, "GET", url, nil)
if err != nil {
cancel()
continue
}
start := time.Now()
resp, err := client.Do(req)
if err != nil {
cancel()
continue
}
n, _ := io.Copy(io.Discard, resp.Body)
_ = resp.Body.Close()
cancel()
duration := time.Since(start)
if n > 0 {
totalBytes += n
totalDuration += duration
}
}
}
if totalBytes == 0 || totalDuration == 0 {
return 0
}
// Calculate speed in Mbps
seconds := totalDuration.Seconds()
if seconds == 0 {
return 0
}
bytesPerSecond := float64(totalBytes) / seconds
mbps := (bytesPerSecond * 8) / (1024 * 1024)
return float64(int(mbps*100)) / 100
}
// FormatBandwidth formats bandwidth for display
func FormatBandwidth(mbps float64) string {
if mbps == 0 {
return "Unknown"
}
if mbps >= 1000 {
return fmt.Sprintf("%.1f Gbps", mbps/1000)
}
return fmt.Sprintf("%.1f Mbps", mbps)
}
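The display rules are easy to exercise in isolation: zero means the test never produced a result, and 1000 Mbps or more switches to Gbps. A mirrored sketch:

```go
package main

import "fmt"

// formatBandwidth mirrors FormatBandwidth above: 0 renders as "Unknown",
// values >= 1000 Mbps are shown in Gbps, everything else in Mbps.
func formatBandwidth(mbps float64) string {
	if mbps == 0 {
		return "Unknown"
	}
	if mbps >= 1000 {
		return fmt.Sprintf("%.1f Gbps", mbps/1000)
	}
	return fmt.Sprintf("%.1f Mbps", mbps)
}

func main() {
	fmt.Println(formatBandwidth(0), formatBandwidth(250), formatBandwidth(2500))
}
```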

View File

@@ -1,40 +1,75 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// Copyright 2026 MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
package envcheck
import (
"bufio"
"context"
"encoding/json"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"time"
"github.com/docker/docker/client"
"golang.org/x/sys/unix"
)
// DiskInfo holds disk space information
type DiskInfo struct {
Path string `json:"path,omitempty"` // Path being checked (working directory)
Total uint64 `json:"total_bytes"`
Free uint64 `json:"free_bytes"`
Used uint64 `json:"used_bytes"`
UsedPercent float64 `json:"used_percent"`
}
// CPUInfo holds CPU load information
type CPUInfo struct {
NumCPU int `json:"num_cpu"` // Number of logical CPUs
LoadAvg1m float64 `json:"load_avg_1m"` // 1-minute load average
LoadAvg5m float64 `json:"load_avg_5m"` // 5-minute load average
LoadAvg15m float64 `json:"load_avg_15m"` // 15-minute load average
LoadPercent float64 `json:"load_percent"` // (load_avg_1m / num_cpu) * 100
}
// DistroInfo holds Linux distribution information
type DistroInfo struct {
ID string `json:"id,omitempty"` // e.g., "ubuntu", "debian", "fedora"
VersionID string `json:"version_id,omitempty"` // e.g., "24.04", "12"
PrettyName string `json:"pretty_name,omitempty"` // e.g., "Ubuntu 24.04 LTS"
}
// XcodeInfo holds Xcode and iOS development information
type XcodeInfo struct {
Version string `json:"version,omitempty"`
Build string `json:"build,omitempty"`
SDKs []string `json:"sdks,omitempty"` // e.g., ["iOS 17.0", "macOS 14.0"]
Simulators []string `json:"simulators,omitempty"` // Available iOS simulators
}
// RunnerCapabilities represents the capabilities of a runner for AI consumption
type RunnerCapabilities struct {
OS string `json:"os"`
Arch string `json:"arch"`
Distro *DistroInfo `json:"distro,omitempty"`
Xcode *XcodeInfo `json:"xcode,omitempty"`
Docker bool `json:"docker"`
DockerCompose bool `json:"docker_compose"`
ContainerRuntime string `json:"container_runtime,omitempty"`
Shell []string `json:"shell,omitempty"`
Tools map[string][]string `json:"tools,omitempty"`
BuildTools []string `json:"build_tools,omitempty"` // Available build/installer tools
PackageManagers []string `json:"package_managers,omitempty"`
Features *CapabilityFeatures `json:"features,omitempty"`
Limitations []string `json:"limitations,omitempty"`
Disk *DiskInfo `json:"disk,omitempty"`
CPU *CPUInfo `json:"cpu,omitempty"`
Bandwidth *BandwidthInfo `json:"bandwidth,omitempty"`
SuggestedLabels []string `json:"suggested_labels,omitempty"`
Capacity int `json:"capacity,omitempty"` // Number of concurrent jobs this runner can handle
}
// CapabilityFeatures represents feature support flags
@@ -46,11 +81,15 @@ type CapabilityFeatures struct {
}
// DetectCapabilities detects the runner's capabilities
func DetectCapabilities(ctx context.Context, dockerHost string) *RunnerCapabilities {
cap := &RunnerCapabilities{
// workingDir is the directory where builds will run (for disk space detection)
func DetectCapabilities(ctx context.Context, dockerHost string, workingDir string, capacity int) *RunnerCapabilities {
caps := &RunnerCapabilities{
Capacity: capacity,
OS: runtime.GOOS,
Arch: runtime.GOARCH,
Tools: make(map[string][]string),
BuildTools: []string{},
PackageManagers: []string{},
Shell: detectShells(),
Features: &CapabilityFeatures{
ArtifactsV4: false, // Gitea doesn't support v4 artifacts
@@ -64,50 +103,239 @@ func DetectCapabilities(ctx context.Context, dockerHost string) *RunnerCapabilit
},
}
// Detect Linux distribution
if runtime.GOOS == "linux" {
caps.Distro = detectLinuxDistro()
}
// Detect macOS Xcode/iOS
if runtime.GOOS == "darwin" {
caps.Xcode = detectXcode(ctx)
}
// Detect Docker
cap.Docker, cap.ContainerRuntime = detectDocker(ctx, dockerHost)
if cap.Docker {
cap.DockerCompose = detectDockerCompose(ctx)
cap.Features.Services = true
caps.Docker, caps.ContainerRuntime = detectDocker(ctx, dockerHost)
if caps.Docker {
caps.DockerCompose = detectDockerCompose(ctx)
caps.Features.Services = true
}
// Detect common tools
detectTools(ctx, cap)
detectTools(ctx, caps)
// Detect disk space
cap.Disk = detectDiskSpace()
// Detect build tools
detectBuildTools(ctx, caps)
return cap
// Detect package managers
detectPackageManagers(ctx, caps)
// Detect disk space on the working directory's filesystem
caps.Disk = detectDiskSpace(workingDir)
// Detect CPU load
caps.CPU = detectCPULoad()
// Generate suggested labels based on detected capabilities
caps.SuggestedLabels = generateSuggestedLabels(caps)
return caps
}
// detectDiskSpace detects disk space on the root filesystem
func detectDiskSpace() *DiskInfo {
var stat unix.Statfs_t
// detectXcode detects Xcode and iOS development capabilities on macOS
func detectXcode(ctx context.Context) *XcodeInfo {
timeoutCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
// Get stats for root filesystem (or current working directory)
path := "/"
if runtime.GOOS == "windows" {
path = "C:\\"
}
err := unix.Statfs(path, &stat)
// Check for xcodebuild
cmd := exec.CommandContext(timeoutCtx, "xcodebuild", "-version")
output, err := cmd.Output()
if err != nil {
return nil
}
total := stat.Blocks * uint64(stat.Bsize)
free := stat.Bavail * uint64(stat.Bsize)
used := total - free
usedPercent := float64(used) / float64(total) * 100
return &DiskInfo{
Total: total,
Free: free,
Used: used,
UsedPercent: usedPercent,
xcode := &XcodeInfo{}
lines := strings.Split(string(output), "\n")
for _, line := range lines {
if strings.HasPrefix(line, "Xcode ") {
xcode.Version = strings.TrimPrefix(line, "Xcode ")
} else if strings.HasPrefix(line, "Build version ") {
xcode.Build = strings.TrimPrefix(line, "Build version ")
}
}
// Get available SDKs
cmd = exec.CommandContext(timeoutCtx, "xcodebuild", "-showsdks")
output, err = cmd.Output()
if err == nil {
lines = strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
// Look for SDK lines like "-sdk iphoneos17.0" or "iOS 17.0"
if strings.Contains(line, "SDK") || strings.HasPrefix(line, "-sdk") {
continue // Skip header lines
}
if strings.Contains(line, "iOS") || strings.Contains(line, "macOS") ||
strings.Contains(line, "watchOS") || strings.Contains(line, "tvOS") ||
strings.Contains(line, "visionOS") || strings.Contains(line, "xrOS") {
// Extract SDK name
if idx := strings.Index(line, "-sdk"); idx != -1 {
sdkPart := strings.TrimSpace(line[:idx])
if sdkPart != "" {
xcode.SDKs = append(xcode.SDKs, sdkPart)
}
}
}
}
}
// Get available simulators
cmd = exec.CommandContext(timeoutCtx, "xcrun", "simctl", "list", "devices", "available", "-j")
output, err = cmd.Output()
if err == nil {
var simData struct {
Devices map[string][]struct {
Name string `json:"name"`
State string `json:"state"`
} `json:"devices"`
}
if json.Unmarshal(output, &simData) == nil {
seen := make(map[string]bool)
for runtimeName, devices := range simData.Devices { // avoid shadowing the runtime package
if strings.Contains(runtimeName, "iOS") {
for _, dev := range devices {
key := dev.Name
if !seen[key] {
seen[key] = true
xcode.Simulators = append(xcode.Simulators, dev.Name)
}
}
}
}
}
}
if xcode.Version == "" {
return nil
}
return xcode
}
// detectLinuxDistro reads /etc/os-release to get distribution info
func detectLinuxDistro() *DistroInfo {
file, err := os.Open("/etc/os-release")
if err != nil {
return nil
}
defer func() { _ = file.Close() }()
distro := &DistroInfo{}
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
switch {
case strings.HasPrefix(line, "ID="):
distro.ID = strings.Trim(strings.TrimPrefix(line, "ID="), "\"")
case strings.HasPrefix(line, "VERSION_ID="):
distro.VersionID = strings.Trim(strings.TrimPrefix(line, "VERSION_ID="), "\"")
case strings.HasPrefix(line, "PRETTY_NAME="):
distro.PrettyName = strings.Trim(strings.TrimPrefix(line, "PRETTY_NAME="), "\"")
}
}
if distro.ID == "" {
return nil
}
return distro
}
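The key=value parsing in detectLinuxDistro (prefix match, then trim the key and surrounding quotes) can be exercised on an in-memory sample instead of /etc/os-release. A sketch with a hypothetical parseOSRelease helper:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease applies the same prefix/trim logic detectLinuxDistro
// uses on /etc/os-release, over an arbitrary string.
func parseOSRelease(content string) (id, versionID, prettyName string) {
	scanner := bufio.NewScanner(strings.NewReader(content))
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case strings.HasPrefix(line, "ID="):
			id = strings.Trim(strings.TrimPrefix(line, "ID="), "\"")
		case strings.HasPrefix(line, "VERSION_ID="):
			versionID = strings.Trim(strings.TrimPrefix(line, "VERSION_ID="), "\"")
		case strings.HasPrefix(line, "PRETTY_NAME="):
			prettyName = strings.Trim(strings.TrimPrefix(line, "PRETTY_NAME="), "\"")
		}
	}
	return id, versionID, prettyName
}

func main() {
	sample := "PRETTY_NAME=\"Ubuntu 24.04 LTS\"\nID=ubuntu\nVERSION_ID=\"24.04\"\n"
	fmt.Println(parseOSRelease(sample))
}
```

Note that `strings.HasPrefix(line, "ID=")` does not match `VERSION_ID=` lines, so the case order is safe.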
// generateSuggestedLabels creates industry-standard labels based on capabilities
func generateSuggestedLabels(caps *RunnerCapabilities) []string {
labels := []string{}
seen := make(map[string]bool)
addLabel := func(label string) {
if label != "" && !seen[label] {
seen[label] = true
labels = append(labels, label)
}
}
// OS labels
switch caps.OS {
case "linux":
addLabel("linux")
addLabel("linux-latest")
case "windows":
addLabel("windows")
addLabel("windows-latest")
case "darwin":
addLabel("macos")
addLabel("macos-latest")
}
// Distro labels (Linux only)
if caps.Distro != nil && caps.Distro.ID != "" {
distro := strings.ToLower(caps.Distro.ID)
addLabel(distro)
addLabel(distro + "-latest")
}
// Xcode/iOS labels (macOS only)
if caps.Xcode != nil {
addLabel("xcode")
// Check for SDKs
for _, sdk := range caps.Xcode.SDKs {
sdkLower := strings.ToLower(sdk)
if strings.Contains(sdkLower, "ios") {
addLabel("ios")
}
if strings.Contains(sdkLower, "visionos") || strings.Contains(sdkLower, "xros") {
addLabel("visionos")
}
if strings.Contains(sdkLower, "watchos") {
addLabel("watchos")
}
if strings.Contains(sdkLower, "tvos") {
addLabel("tvos")
}
}
// If simulators available, add simulator label
if len(caps.Xcode.Simulators) > 0 {
addLabel("ios-simulator")
}
}
// Tool-based labels
if _, ok := caps.Tools["dotnet"]; ok {
addLabel("dotnet")
}
if _, ok := caps.Tools["java"]; ok {
addLabel("java")
}
if _, ok := caps.Tools["node"]; ok {
addLabel("node")
}
// Build tool labels
for _, tool := range caps.BuildTools {
switch tool {
case "msbuild":
addLabel("msbuild")
case "visual-studio":
addLabel("vs2022") // or detect actual version
case "inno-setup":
addLabel("inno-setup")
case "nsis":
addLabel("nsis")
}
}
return labels
}
// ToJSON converts capabilities to JSON string for transmission
func (c *RunnerCapabilities) ToJSON() string {
data, err := json.Marshal(c)
@@ -157,7 +385,7 @@ func detectDocker(ctx context.Context, dockerHost string) (bool, string) {
if err != nil {
return false, ""
}
defer cli.Close()
defer func() { _ = cli.Close() }()
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
@@ -193,7 +421,7 @@ func detectDockerCompose(ctx context.Context) bool {
return false
}
func detectTools(ctx context.Context, cap *RunnerCapabilities) {
func detectTools(ctx context.Context, caps *RunnerCapabilities) {
toolDetectors := map[string]func(context.Context) []string{
"node": detectNodeVersions,
"go": detectGoVersions,
@@ -201,11 +429,254 @@ func detectTools(ctx context.Context, cap *RunnerCapabilities) {
"java": detectJavaVersions,
"dotnet": detectDotnetVersions,
"rust": detectRustVersions,
"ruby": detectRubyVersions,
"php": detectPHPVersions,
"swift": detectSwiftVersions,
"kotlin": detectKotlinVersions,
"flutter": detectFlutterVersions,
"dart": detectDartVersions,
"powershell": detectPowerShellVersions,
}
for tool, detector := range toolDetectors {
if versions := detector(ctx); len(versions) > 0 {
cap.Tools[tool] = versions
caps.Tools[tool] = versions
}
}
// Detect additional tools that just need presence check
simpleTools := map[string]string{
"git": "git",
"cmake": "cmake",
"make": "make",
"ninja": "ninja",
"gradle": "gradle",
"maven": "mvn",
"npm": "npm",
"yarn": "yarn",
"pnpm": "pnpm",
"cargo": "cargo",
"pip": "pip3",
}
for name, cmd := range simpleTools {
if v := detectSimpleToolVersion(ctx, cmd); v != "" {
caps.Tools[name] = []string{v}
}
}
}
func detectBuildTools(ctx context.Context, caps *RunnerCapabilities) {
switch runtime.GOOS {
case "windows":
detectWindowsBuildTools(ctx, caps)
case "darwin":
detectMacOSBuildTools(caps)
case "linux":
detectLinuxBuildTools(caps)
}
}
func detectWindowsBuildTools(ctx context.Context, caps *RunnerCapabilities) {
// Check for Visual Studio via vswhere
vswherePaths := []string{
`C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe`,
`C:\Program Files\Microsoft Visual Studio\Installer\vswhere.exe`,
}
for _, vswhere := range vswherePaths {
if _, err := os.Stat(vswhere); err == nil {
cmd := exec.CommandContext(ctx, vswhere, "-latest", "-property", "displayName")
if output, err := cmd.Output(); err == nil && len(output) > 0 {
caps.BuildTools = append(caps.BuildTools, "visual-studio")
break
}
}
}
// Check for MSBuild
msbuildPaths := []string{
`C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Current\Bin\MSBuild.exe`,
`C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Current\Bin\MSBuild.exe`,
`C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin\MSBuild.exe`,
`C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe`,
`C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Current\Bin\MSBuild.exe`,
`C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\MSBuild.exe`,
}
for _, msbuild := range msbuildPaths {
if _, err := os.Stat(msbuild); err == nil {
caps.BuildTools = append(caps.BuildTools, "msbuild")
break
}
}
// Check for Inno Setup
innoSetupPaths := []string{
`C:\Program Files (x86)\Inno Setup 6\ISCC.exe`,
`C:\Program Files\Inno Setup 6\ISCC.exe`,
`C:\Program Files (x86)\Inno Setup 5\ISCC.exe`,
`C:\Program Files\Inno Setup 5\ISCC.exe`,
}
for _, iscc := range innoSetupPaths {
if _, err := os.Stat(iscc); err == nil {
caps.BuildTools = append(caps.BuildTools, "inno-setup")
break
}
}
// Also check PATH
if _, err := exec.LookPath("iscc"); err == nil {
if !contains(caps.BuildTools, "inno-setup") {
caps.BuildTools = append(caps.BuildTools, "inno-setup")
}
}
// Check for NSIS
nsisPaths := []string{
`C:\Program Files (x86)\NSIS\makensis.exe`,
`C:\Program Files\NSIS\makensis.exe`,
}
for _, nsis := range nsisPaths {
if _, err := os.Stat(nsis); err == nil {
caps.BuildTools = append(caps.BuildTools, "nsis")
break
}
}
if _, err := exec.LookPath("makensis"); err == nil {
if !contains(caps.BuildTools, "nsis") {
caps.BuildTools = append(caps.BuildTools, "nsis")
}
}
// Check for WiX Toolset
wixPaths := []string{
`C:\Program Files (x86)\WiX Toolset v3.11\bin\candle.exe`,
`C:\Program Files (x86)\WiX Toolset v3.14\bin\candle.exe`,
}
for _, wix := range wixPaths {
if _, err := os.Stat(wix); err == nil {
caps.BuildTools = append(caps.BuildTools, "wix")
break
}
}
// Check for signtool (Windows SDK)
signtoolPaths, _ := filepath.Glob(`C:\Program Files (x86)\Windows Kits\10\bin\*\x64\signtool.exe`)
if len(signtoolPaths) > 0 {
caps.BuildTools = append(caps.BuildTools, "signtool")
}
}
func detectMacOSBuildTools(caps *RunnerCapabilities) {
// Check for xcpretty
if _, err := exec.LookPath("xcpretty"); err == nil {
caps.BuildTools = append(caps.BuildTools, "xcpretty")
}
// Check for fastlane
if _, err := exec.LookPath("fastlane"); err == nil {
caps.BuildTools = append(caps.BuildTools, "fastlane")
}
// Check for CocoaPods
if _, err := exec.LookPath("pod"); err == nil {
caps.BuildTools = append(caps.BuildTools, "cocoapods")
}
// Check for Carthage
if _, err := exec.LookPath("carthage"); err == nil {
caps.BuildTools = append(caps.BuildTools, "carthage")
}
// Check for SwiftLint
if _, err := exec.LookPath("swiftlint"); err == nil {
caps.BuildTools = append(caps.BuildTools, "swiftlint")
}
// Check for create-dmg or similar
if _, err := exec.LookPath("create-dmg"); err == nil {
caps.BuildTools = append(caps.BuildTools, "create-dmg")
}
// Check for Packages (packagesbuild)
if _, err := exec.LookPath("packagesbuild"); err == nil {
caps.BuildTools = append(caps.BuildTools, "packages")
}
// Check for pkgbuild (built-in)
if _, err := exec.LookPath("pkgbuild"); err == nil {
caps.BuildTools = append(caps.BuildTools, "pkgbuild")
}
// Check for codesign (built-in)
if _, err := exec.LookPath("codesign"); err == nil {
caps.BuildTools = append(caps.BuildTools, "codesign")
}
// Check for notarytool (built-in with Xcode)
if _, err := exec.LookPath("notarytool"); err == nil {
caps.BuildTools = append(caps.BuildTools, "notarytool")
}
}
func detectLinuxBuildTools(caps *RunnerCapabilities) {
// Check for common Linux build tools
tools := []string{
"gcc", "g++", "clang", "clang++",
"autoconf", "automake", "libtool",
"pkg-config", "meson",
"dpkg-deb", "rpmbuild", "fpm",
"appimage-builder", "linuxdeploy",
}
for _, tool := range tools {
if _, err := exec.LookPath(tool); err == nil {
caps.BuildTools = append(caps.BuildTools, tool)
}
}
}
func detectPackageManagers(_ context.Context, caps *RunnerCapabilities) {
switch runtime.GOOS {
case "windows":
if _, err := exec.LookPath("choco"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "chocolatey")
}
if _, err := exec.LookPath("scoop"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "scoop")
}
if _, err := exec.LookPath("winget"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "winget")
}
case "darwin":
if _, err := exec.LookPath("brew"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "homebrew")
}
if _, err := exec.LookPath("port"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "macports")
}
case "linux":
if _, err := exec.LookPath("apt"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "apt")
}
if _, err := exec.LookPath("yum"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "yum")
}
if _, err := exec.LookPath("dnf"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "dnf")
}
if _, err := exec.LookPath("pacman"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "pacman")
}
if _, err := exec.LookPath("zypper"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "zypper")
}
if _, err := exec.LookPath("apk"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "apk")
}
if _, err := exec.LookPath("snap"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "snap")
}
if _, err := exec.LookPath("flatpak"); err == nil {
caps.PackageManagers = append(caps.PackageManagers, "flatpak")
}
}
}
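Every branch of detectPackageManagers is the same probe: exec.LookPath succeeds if the command is on PATH. A generic sketch of that probe, with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"os/exec"
)

// probe returns the subset of candidate commands found on PATH --
// the exec.LookPath pattern detectPackageManagers repeats per OS.
func probe(candidates []string) []string {
	found := []string{}
	for _, c := range candidates {
		if _, err := exec.LookPath(c); err == nil {
			found = append(found, c)
		}
	}
	return found
}

func main() {
	fmt.Println(probe([]string{"sh", "definitely-not-a-real-command-a1b2c3"}))
}
```

LookPath only checks presence and executability; it never runs the tool, which keeps detection fast and side-effect free.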
@@ -228,16 +699,8 @@ func detectPythonVersions(ctx context.Context) []string {
// Also try python
if v := detectToolVersion(ctx, "python", "--version", "Python "); len(v) > 0 {
// Avoid duplicates
for _, ver := range v {
found := false
for _, existing := range versions {
if existing == ver {
found = true
break
}
}
if !found {
if !contains(versions, ver) {
versions = append(versions, ver)
}
}
@@ -253,20 +716,17 @@ func detectJavaVersions(ctx context.Context) []string {
return nil
}
// Java version output goes to stderr and looks like: openjdk version "17.0.1" or java version "1.8.0_301"
lines := strings.Split(string(output), "\n")
for _, line := range lines {
if strings.Contains(line, "version") {
// Extract version from quotes
start := strings.Index(line, "\"")
end := strings.LastIndex(line, "\"")
if start != -1 && end > start {
version := line[start+1 : end]
// Simplify version (e.g., "17.0.1" -> "17")
parts := strings.Split(version, ".")
if len(parts) > 0 {
if parts[0] == "1" && len(parts) > 1 {
return []string{parts[1]} // Java 8 style: 1.8 -> 8
return []string{parts[1]}
}
return []string{parts[0]}
}
@@ -291,21 +751,11 @@ func detectDotnetVersions(ctx context.Context) []string {
if line == "" {
continue
}
// Format: "8.0.100 [/path/to/sdk]"
parts := strings.Split(line, " ")
if len(parts) > 0 {
version := parts[0]
// Simplify to major version
major := strings.Split(version, ".")[0]
// Avoid duplicates
found := false
for _, v := range versions {
if v == major {
found = true
break
}
}
if !found {
if !contains(versions, major) {
versions = append(versions, major)
}
}
@@ -318,6 +768,97 @@ func detectRustVersions(ctx context.Context) []string {
return detectToolVersion(ctx, "rustc", "--version", "rustc ")
}
func detectRubyVersions(ctx context.Context) []string {
return detectToolVersion(ctx, "ruby", "--version", "ruby ")
}
func detectPHPVersions(ctx context.Context) []string {
return detectToolVersion(ctx, "php", "--version", "PHP ")
}
func detectSwiftVersions(ctx context.Context) []string {
return detectToolVersion(ctx, "swift", "--version", "Swift version ")
}
func detectKotlinVersions(ctx context.Context) []string {
return detectToolVersion(ctx, "kotlin", "-version", "Kotlin version ")
}
func detectFlutterVersions(ctx context.Context) []string {
return detectToolVersion(ctx, "flutter", "--version", "Flutter ")
}
func detectDartVersions(ctx context.Context) []string {
return detectToolVersion(ctx, "dart", "--version", "Dart SDK version: ")
}
func detectPowerShellVersions(ctx context.Context) []string {
versions := []string{}
// Check for pwsh (PowerShell Core / PowerShell 7+)
if v := detectPwshVersion(ctx, "pwsh"); v != "" {
versions = append(versions, "pwsh:"+v)
}
// Check for powershell (Windows PowerShell 5.x)
if runtime.GOOS == "windows" {
if v := detectPwshVersion(ctx, "powershell"); v != "" {
versions = append(versions, "powershell:"+v)
}
}
return versions
}
func detectPwshVersion(ctx context.Context, cmd string) string {
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
// Use -Command to get version (same command works for both pwsh and powershell)
c := exec.CommandContext(timeoutCtx, cmd, "-Command", "$PSVersionTable.PSVersion.ToString()")
output, err := c.Output()
if err != nil {
return ""
}
version := strings.TrimSpace(string(output))
// Return major.minor
parts := strings.Split(version, ".")
if len(parts) >= 2 {
return parts[0] + "." + parts[1]
}
return version
}
func detectSimpleToolVersion(ctx context.Context, cmd string) string {
if _, err := exec.LookPath(cmd); err != nil {
return ""
}
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
c := exec.CommandContext(timeoutCtx, cmd, "--version")
output, err := c.Output()
if err != nil {
// --version failed (some tools don't support it); just report presence
return "installed"
}
line := strings.TrimSpace(strings.Split(string(output), "\n")[0])
// Extract version number if possible
parts := strings.Fields(line)
for _, part := range parts {
// Look for something that looks like a version: a token starting with a
// digit, or "v" followed by a digit (e.g. "v2.3"). Requiring the digit
// keeps a bare word like "version" from matching and returning "ersion"
if len(part) > 0 && (part[0] >= '0' && part[0] <= '9' ||
(len(part) > 1 && part[0] == 'v' && part[1] >= '0' && part[1] <= '9')) {
return strings.TrimPrefix(part, "v")
}
}
return "installed"
}
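The version-token scan is worth testing on its own: accepting any token that merely starts with 'v' would match the word "version" in output like "git version 2.39.2", so a digit must follow the 'v'. A standalone sketch with a hypothetical firstVersionToken helper:

```go
package main

import (
	"fmt"
	"strings"
)

// firstVersionToken scans whitespace-separated fields for a version-like
// token: one starting with a digit, or "v" followed by a digit. It falls
// back to "installed" when nothing matches, like detectSimpleToolVersion.
func firstVersionToken(line string) string {
	for _, part := range strings.Fields(line) {
		if len(part) > 0 && part[0] >= '0' && part[0] <= '9' {
			return part
		}
		if len(part) > 1 && part[0] == 'v' && part[1] >= '0' && part[1] <= '9' {
			return strings.TrimPrefix(part, "v")
		}
	}
	return "installed"
}

func main() {
	fmt.Println(firstVersionToken("git version 2.39.2"))
	fmt.Println(firstVersionToken("tool version v3.28.1"))
}
```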
func detectToolVersion(ctx context.Context, cmd string, args string, prefix string) []string {
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
@@ -335,13 +876,10 @@ func detectToolVersion(ctx context.Context, cmd string, args string, prefix stri
}
}
// Get just the version number
parts := strings.Fields(line)
if len(parts) > 0 {
version := parts[0]
// Clean up version string
version = strings.TrimPrefix(version, "v")
// Return major.minor or just major
vparts := strings.Split(version, ".")
if len(vparts) >= 2 {
return []string{vparts[0] + "." + vparts[1]}
@@ -351,3 +889,166 @@ func detectToolVersion(ctx context.Context, cmd string, args string, prefix stri
return nil
}
func contains(slice []string, item string) bool {
for _, s := range slice {
if s == item {
return true
}
}
return false
}
// detectCPULoad detects the current CPU load
func detectCPULoad() *CPUInfo {
numCPU := runtime.NumCPU()
info := &CPUInfo{
NumCPU: numCPU,
}
switch runtime.GOOS {
case "linux":
// Check if running in a container (LXC/Docker)
// Containers share /proc/loadavg with host, giving inaccurate readings
inContainer := isInContainer()
if inContainer {
// Try to get CPU usage from cgroups (more accurate for containers)
if cgroupCPU := getContainerCPUUsage(); cgroupCPU >= 0 {
info.LoadPercent = cgroupCPU
info.LoadAvg1m = cgroupCPU * float64(numCPU) / 100.0
return info
}
// If cgroup reading failed, report 0 - better than host's load
info.LoadPercent = 0
info.LoadAvg1m = 0
return info
}
// Not in container - use traditional /proc/loadavg
data, err := os.ReadFile("/proc/loadavg")
if err != nil {
return info
}
parts := strings.Fields(string(data))
if len(parts) >= 3 {
if load, err := parseFloat(parts[0]); err == nil {
info.LoadAvg1m = load
}
if load, err := parseFloat(parts[1]); err == nil {
info.LoadAvg5m = load
}
if load, err := parseFloat(parts[2]); err == nil {
info.LoadAvg15m = load
}
}
case "darwin":
// Use sysctl on macOS
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "sysctl", "-n", "vm.loadavg")
output, err := cmd.Output()
if err == nil {
// Output format: "{ 1.23 4.56 7.89 }"
line := strings.Trim(string(output), "{ }\n")
parts := strings.Fields(line)
if len(parts) >= 3 {
if load, err := parseFloat(parts[0]); err == nil {
info.LoadAvg1m = load
}
if load, err := parseFloat(parts[1]); err == nil {
info.LoadAvg5m = load
}
if load, err := parseFloat(parts[2]); err == nil {
info.LoadAvg15m = load
}
}
}
case "windows":
// Windows doesn't have load average, use PowerShell to get CPU usage
// wmic is deprecated, use Get-CimInstance instead
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "powershell", "-NoProfile", "-Command",
"(Get-CimInstance Win32_Processor | Measure-Object -Property LoadPercentage -Average).Average")
output, err := cmd.Output()
if err == nil {
line := strings.TrimSpace(string(output))
if load, err := parseFloat(line); err == nil {
info.LoadPercent = load
info.LoadAvg1m = load * float64(numCPU) / 100.0
return info
}
}
}
// Calculate load percent (load_avg_1m / num_cpu * 100)
if info.LoadAvg1m > 0 && numCPU > 0 {
info.LoadPercent = (info.LoadAvg1m / float64(numCPU)) * 100.0
}
return info
}
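The normalization at the end of detectCPULoad is the usual load-average rule: divide the 1-minute average by the core count, so 100% means one runnable task per core. A sketch of that formula as a hypothetical loadPercent helper:

```go
package main

import "fmt"

// loadPercent computes (load_avg_1m / num_cpu) * 100, as detectCPULoad
// does; values above 100 mean the machine is oversubscribed.
func loadPercent(loadAvg1m float64, numCPU int) float64 {
	if loadAvg1m <= 0 || numCPU <= 0 {
		return 0
	}
	return (loadAvg1m / float64(numCPU)) * 100.0
}

func main() {
	fmt.Println(loadPercent(2.0, 8))  // a quarter of 8 cores busy
	fmt.Println(loadPercent(12.0, 8)) // oversubscribed: >100%
}
```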
// isInContainer checks if we're running inside a container (LXC/Docker)
func isInContainer() bool {
// Check for Docker
if _, err := os.Stat("/.dockerenv"); err == nil {
return true
}
// Check PID 1's environment for container type (works for LXC on Proxmox)
if data, err := os.ReadFile("/proc/1/environ"); err == nil {
// environ uses null bytes as separators
content := string(data)
if strings.Contains(content, "container=lxc") || strings.Contains(content, "container=docker") {
return true
}
}
// Check for LXC/Docker in cgroup path (cgroup v1)
if data, err := os.ReadFile("/proc/1/cgroup"); err == nil {
content := string(data)
if strings.Contains(content, "/lxc/") || strings.Contains(content, "/docker/") {
return true
}
}
// Check for container environment variable in current process
if os.Getenv("container") != "" {
return true
}
// Check for systemd-nspawn or other containers
if _, err := os.Stat("/run/.containerenv"); err == nil {
return true
}
return false
}
// getContainerCPUUsage tries to get CPU usage from cgroups
// Returns -1 if unable to determine
func getContainerCPUUsage() float64 {
// Try cgroup v2 first
if data, err := os.ReadFile("/sys/fs/cgroup/cpu.stat"); err == nil {
lines := strings.Split(string(data), "\n")
for _, line := range lines {
if strings.HasPrefix(line, "usage_usec ") {
// This gives total CPU time, not current usage
// For now, we can't easily calculate percentage without storing previous value
// Return -1 to fall back to reporting 0
break
}
}
}
// Note: Reading /proc/self/stat could give us utime and stime (fields 14 and 15),
// but these are cumulative values, not instantaneous. For containers, we report 0
// rather than misleading host data.
return -1 // Unable to determine - caller should handle
}
// parseFloat parses a string to float64
func parseFloat(s string) (float64, error) {
s = strings.TrimSpace(s)
var f float64
err := json.Unmarshal([]byte(s), &f)
return f, err
}
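parseFloat above leans on json.Unmarshal rather than the more conventional strconv.ParseFloat; for the plain decimal fields it consumes (e.g. /proc/loadavg) both accept the same inputs. A quick exercise of the helper:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parseFloat is the json.Unmarshal-based helper from above: trim
// whitespace, then decode the remainder as a JSON number.
func parseFloat(s string) (float64, error) {
	s = strings.TrimSpace(s)
	var f float64
	err := json.Unmarshal([]byte(s), &f)
	return f, err
}

func main() {
	v, err := parseFloat(" 0.52 ")
	fmt.Println(v, err)
	_, err = parseFloat("not-a-number")
	fmt.Println(err != nil) // non-numeric input yields an error
}
```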

View File

@@ -0,0 +1,43 @@
// Copyright 2026 MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
//go:build unix
package envcheck
import (
"golang.org/x/sys/unix"
)
// detectDiskSpace detects disk space on the specified path's filesystem (Unix version)
// If path is empty, defaults to "/"
func detectDiskSpace(path string) *DiskInfo {
if path == "" {
path = "/"
}
var stat unix.Statfs_t
err := unix.Statfs(path, &stat)
if err != nil {
// Fall back to the root filesystem if the path can't be statted
err = unix.Statfs("/", &stat)
if err != nil {
return nil
}
path = "/"
}
total := stat.Blocks * uint64(stat.Bsize)
free := stat.Bavail * uint64(stat.Bsize)
used := total - free
usedPercent := float64(used) / float64(total) * 100
return &DiskInfo{
Path: path,
Total: total,
Free: free,
Used: used,
UsedPercent: usedPercent,
}
}
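The Statfs arithmetic above can be checked with plain numbers; note that `free` here is `Bavail` (space available to unprivileged users), so `used` slightly overstates actual usage on filesystems with reserved blocks. A small self-contained sketch (`diskUsage` is illustrative, not part of this repo):

```go
package main

import "fmt"

// diskUsage mirrors the Statfs arithmetic above: sizes come from block
// counts times the filesystem block size, and the used percentage is
// (total - available) over total.
func diskUsage(blocks, bavail, bsize uint64) (total, free, used uint64, usedPercent float64) {
	total = blocks * bsize
	free = bavail * bsize
	used = total - free
	if total > 0 {
		usedPercent = float64(used) / float64(total) * 100
	}
	return
}

func main() {
	// 1000 blocks of 4096 bytes with 250 available: 75% used.
	total, free, used, pct := diskUsage(1000, 250, 4096)
	fmt.Println(total, free, used, pct)
}
```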

View File

@@ -0,0 +1,57 @@
// Copyright 2026 MarketAlly. All rights reserved.
// SPDX-License-Identifier: MIT
//go:build windows
package envcheck
import (
"path/filepath"
"golang.org/x/sys/windows"
)
// detectDiskSpace detects disk space on the specified path's drive (Windows version)
// If path is empty, defaults to "C:\"
func detectDiskSpace(path string) *DiskInfo {
if path == "" {
path = "C:\\"
}
// Resolve to absolute path
absPath, err := filepath.Abs(path)
if err != nil {
absPath = "C:\\"
}
// Extract drive letter (e.g., "D:\" from "D:\builds\runner")
drivePath := filepath.VolumeName(absPath) + "\\"
if drivePath == "\\" {
drivePath = "C:\\"
}
var freeBytesAvailable, totalNumberOfBytes, totalNumberOfFreeBytes uint64
pathPtr := windows.StringToUTF16Ptr(drivePath)
err = windows.GetDiskFreeSpaceEx(pathPtr, &freeBytesAvailable, &totalNumberOfBytes, &totalNumberOfFreeBytes)
if err != nil {
// Fall back to the C: drive
pathPtr = windows.StringToUTF16Ptr("C:\\")
err = windows.GetDiskFreeSpaceEx(pathPtr, &freeBytesAvailable, &totalNumberOfBytes, &totalNumberOfFreeBytes)
if err != nil {
return nil
}
drivePath = "C:\\"
}
used := totalNumberOfBytes - totalNumberOfFreeBytes
usedPercent := float64(used) / float64(totalNumberOfBytes) * 100
return &DiskInfo{
Path: drivePath,
Total: totalNumberOfBytes,
Free: totalNumberOfFreeBytes,
Used: used,
UsedPercent: usedPercent,
}
}
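The drive-letter extraction step can be sketched portably: `filepath.VolumeName` is Windows-aware, so this sketch reimplements the check with plain string inspection (`driveRoot` is an illustrative helper so the behavior is the same on any OS):

```go
package main

import "fmt"

// driveRoot mirrors the volume-extraction step above: a path with a
// "X:" prefix maps to that drive's root (e.g. `D:\` for
// `D:\builds\runner`), and anything without a drive prefix falls
// back to `C:\`.
func driveRoot(absPath string) string {
	if len(absPath) >= 2 && absPath[1] == ':' {
		return absPath[:2] + `\`
	}
	return `C:\`
}

func main() {
	fmt.Println(driveRoot(`D:\builds\runner`)) // D:\
	fmt.Println(driveRoot(`relative\path`))    // C:\
}
```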

View File

@@ -10,6 +10,7 @@ import (
"github.com/docker/docker/client"
)
// CheckIfDockerRunning verifies that the Docker daemon is running and accessible.
func CheckIfDockerRunning(ctx context.Context, configDockerHost string) error {
opts := []client.Opt{
client.FromEnv,
@@ -23,7 +24,7 @@ func CheckIfDockerRunning(ctx context.Context, configDockerHost string) error {
if err != nil {
return err
}
defer cli.Close()
defer func() { _ = cli.Close() }()
_, err = cli.Ping(ctx)
if err != nil {

View File

@@ -1,6 +1,7 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// Package labels provides utilities for parsing and managing runner labels.
package labels
import (
@@ -8,17 +9,20 @@ import (
"strings"
)
// Label scheme constants define the execution environments.
const (
SchemeHost = "host"
SchemeDocker = "docker"
)
// Label represents a parsed runner label with name, schema, and optional argument.
type Label struct {
Name string
Schema string
Arg string
}
// Parse parses a label string in the format "name:schema:arg" and returns a Label.
func Parse(str string) (*Label, error) {
splits := strings.SplitN(str, ":", 3)
label := &Label{
@@ -38,8 +42,10 @@ func Parse(str string) (*Label, error) {
return label, nil
}
// Labels is a slice of Label pointers.
type Labels []*Label
// RequireDocker returns true if any label uses the docker schema.
func (l Labels) RequireDocker() bool {
for _, label := range l {
if label.Schema == SchemeDocker {
@@ -49,6 +55,7 @@ func (l Labels) RequireDocker() bool {
return false
}
// PickPlatform selects the appropriate platform based on the runsOn requirements.
func (l Labels) PickPlatform(runsOn []string) string {
platforms := make(map[string]string, len(l))
for _, label := range l {
@@ -82,6 +89,7 @@ func (l Labels) PickPlatform(runsOn []string) string {
return "docker.gitea.com/runner-images:ubuntu-latest"
}
// Names returns the names of all labels.
func (l Labels) Names() []string {
names := make([]string, 0, len(l))
for _, label := range l {
@@ -90,6 +98,7 @@ func (l Labels) Names() []string {
return names
}
// ToStrings converts labels back to their string representation.
func (l Labels) ToStrings() []string {
ls := make([]string, 0, len(l))
for _, label := range l {

View File

@@ -1,6 +1,7 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// Package report provides task reporting functionality for communicating with the server.
package report
import (
@@ -18,9 +19,10 @@ import (
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/timestamppb"
"gitea.com/gitea/act_runner/internal/pkg/client"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/client"
)
// Reporter handles logging and state reporting for running tasks.
type Reporter struct {
ctx context.Context
cancel context.CancelFunc
@@ -42,6 +44,7 @@ type Reporter struct {
stopCommandEndToken string
}
// NewReporter creates a new Reporter for the given task.
func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task) *Reporter {
var oldnew []string
if v := task.Context.Fields["token"].GetStringValue(); v != "" {
@@ -72,6 +75,7 @@ func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.C
return rv
}
// ResetSteps initializes the step states with the given number of steps.
func (r *Reporter) ResetSteps(l int) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
@@ -82,6 +86,7 @@ func (r *Reporter) ResetSteps(l int) {
}
}
// Levels returns all log levels that this hook should fire for.
func (r *Reporter) Levels() []log.Level {
return log.AllLevels
}
@@ -93,6 +98,7 @@ func appendIfNotNil[T any](s []*T, v *T) []*T {
return s
}
// Fire processes a log entry and updates the task state accordingly.
func (r *Reporter) Fire(entry *log.Entry) error {
r.stateMu.Lock()
defer r.stateMu.Unlock()
@@ -175,6 +181,7 @@ func (r *Reporter) Fire(entry *log.Entry) error {
return nil
}
// RunDaemon starts the periodic reporting of logs and state.
func (r *Reporter) RunDaemon() {
if r.closed {
return
@@ -189,6 +196,7 @@ func (r *Reporter) RunDaemon() {
time.AfterFunc(time.Second, r.RunDaemon)
}
// Logf adds a formatted log message to the report.
func (r *Reporter) Logf(format string, a ...interface{}) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
@@ -205,6 +213,7 @@ func (r *Reporter) logf(format string, a ...interface{}) {
}
}
// SetOutputs stores the job outputs to be reported to the server.
func (r *Reporter) SetOutputs(outputs map[string]string) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
@@ -225,6 +234,7 @@ func (r *Reporter) SetOutputs(outputs map[string]string) {
}
}
// Close finalizes the report and sends any remaining logs and state.
func (r *Reporter) Close(lastWords string) error {
r.closed = true
@@ -260,6 +270,7 @@ func (r *Reporter) Close(lastWords string) error {
}, retry.Context(r.ctx))
}
// ReportLog sends accumulated log rows to the server.
func (r *Reporter) ReportLog(noMore bool) error {
r.clientM.Lock()
defer r.clientM.Unlock()
@@ -295,6 +306,7 @@ func (r *Reporter) ReportLog(noMore bool) error {
return nil
}
// ReportState sends the current task state to the server.
func (r *Reporter) ReportState() error {
r.clientM.Lock()
defer r.clientM.Unlock()
@@ -373,7 +385,7 @@ func (r *Reporter) parseResult(result interface{}) (runnerv1.Result, bool) {
var cmdRegex = regexp.MustCompile(`^::([^ :]+)( .*)?::(.*)$`)
func (r *Reporter) handleCommand(originalContent, command, parameters, value string) *string {
func (r *Reporter) handleCommand(originalContent, command, _ /* parameters */, value string) *string {
if r.stopCommandEndToken != "" && command != r.stopCommandEndToken {
return &originalContent
}

View File

@@ -16,7 +16,7 @@ import (
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/types/known/structpb"
"gitea.com/gitea/act_runner/internal/pkg/client/mocks"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/client/mocks"
)
func TestReporter_parseLogRow(t *testing.T) {

View File

@@ -1,11 +1,13 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// Package ver provides version information for the runner.
package ver
// go build -ldflags "-X gitea.com/gitea/act_runner/internal/pkg/ver.version=1.2.3"
// go build -ldflags "-X git.marketally.com/gitcaddy/gitcaddy-runner/internal/pkg/ver.version=1.2.3"
var version = "dev"
// Version returns the current runner version.
func Version() string {
return version
}

View File

@@ -1,6 +1,7 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// GitCaddy Runner is a CI/CD runner for Gitea Actions.
package main
import (
@@ -8,7 +9,7 @@ import (
"os/signal"
"syscall"
"gitea.com/gitea/act_runner/internal/app/cmd"
"git.marketally.com/gitcaddy/gitcaddy-runner/internal/app/cmd"
)
func main() {