---
title: Performance Optimisation
description: Optimise your Wails application for maximum performance
sidebar:
  order: 9
---

## Overview

Optimise your Wails application for speed, memory efficiency, and responsiveness.

## Frontend Optimisation

### Bundle Size

```javascript
// vite.config.js
export default {
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          vendor: ['react', 'react-dom'],
        },
      },
    },
    minify: 'terser',
    terserOptions: {
      compress: {
        drop_console: true,
      },
    },
  },
}
```

### Code Splitting

```javascript
import { lazy, Suspense } from 'react'

// Lazy load components
const Settings = lazy(() => import('./Settings'))

function App() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <Settings />
    </Suspense>
  )
}
```

### Asset Optimisation

```javascript
// Optimise images
import { defineConfig } from 'vite'
import imagemin from 'vite-plugin-imagemin'

export default defineConfig({
  plugins: [
    imagemin({
      gifsicle: { optimizationLevel: 3 },
      optipng: { optimizationLevel: 7 },
      svgo: { plugins: [{ removeViewBox: false }] },
    }),
  ],
})
```

## Backend Optimisation

### Efficient Bindings

```go
// ❌ Bad: Return everything
func (s *Service) GetAllData() []Data {
    return s.db.FindAll() // Could be huge
}

// ✅ Good: Paginate
func (s *Service) GetData(page, size int) (*PagedData, error) {
    return s.db.FindPaged(page, size)
}
```

### Caching

```go
type CachedService struct {
    cache *lru.Cache
    ttl   time.Duration
}

func (s *CachedService) GetData(key string) (interface{}, error) {
    // Check cache
    if val, ok := s.cache.Get(key); ok {
        return val, nil
    }

    // Fetch and cache
    data, err := s.fetchData(key)
    if err != nil {
        return nil, err
    }
    s.cache.Add(key, data)
    return data, nil
}
```

### Goroutines for Long Operations

```go
func (s *Service) ProcessLargeFile(path string) error {
    // Process in background
    go func() {
        result, err := s.process(path)
        if err != nil {
            s.app.Event.Emit("process-error", err.Error())
            return
        }
        s.app.Event.Emit("process-complete", result)
    }()
    return nil
}
```

## Memory Optimisation

### Avoid Memory Leaks

```go
// ❌ Bad: Goroutine leak
func (s *Service) StartPolling() {
    ticker := time.NewTicker(1 * time.Second)
    go func() {
        for range ticker.C {
            s.poll()
        }
    }() // ticker never stopped!
}

// ✅ Good: Proper cleanup
func (s *Service) StartPolling() {
    ticker := time.NewTicker(1 * time.Second)
    s.stopChan = make(chan bool)
    go func() {
        for {
            select {
            case <-ticker.C:
                s.poll()
            case <-s.stopChan:
                ticker.Stop()
                return
            }
        }
    }()
}

func (s *Service) StopPolling() {
    close(s.stopChan)
}
```

### Pool Resources

```go
var bufferPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func processData(data []byte) []byte {
    buf := bufferPool.Get().(*bytes.Buffer)
    defer bufferPool.Put(buf)
    buf.Reset()

    buf.Write(data)
    // Process...

    // Copy the result out: once the buffer goes back to the pool,
    // its backing array may be reused by another caller.
    result := make([]byte, buf.Len())
    copy(result, buf.Bytes())
    return result
}
```

## Event Optimisation

### Debounce Events

```javascript
// Debounce frequent events
let debounceTimer

function handleInput(value) {
  clearTimeout(debounceTimer)
  debounceTimer = setTimeout(() => {
    UpdateData(value)
  }, 300)
}
```

### Batch Updates

```go
type BatchProcessor struct {
    items []Item
    mu    sync.Mutex
    timer *time.Timer
}

func (b *BatchProcessor) Add(item Item) {
    b.mu.Lock()
    defer b.mu.Unlock()

    b.items = append(b.items, item)

    if b.timer == nil {
        b.timer = time.AfterFunc(100*time.Millisecond, b.flush)
    }
}

func (b *BatchProcessor) flush() {
    b.mu.Lock()
    items := b.items
    b.items = nil
    b.timer = nil
    b.mu.Unlock()

    // Process batch
    processBatch(items)
}
```

## Build Optimisation

### Binary Size

```bash
# Strip debug symbols
wails3 build -ldflags "-s -w"

# Reduce binary size further
go build -ldflags="-s -w" -trimpath
```

### Compilation Speed

```bash
# The build cache is enabled by default; check its location
go env GOCACHE

# Set the number of parallel build jobs (defaults to GOMAXPROCS)
go build -p 8
```

## Profiling

### CPU Profiling

```go
import (
    "os"
    "runtime/pprof"
)

func profileCPU() {
    f, _ := os.Create("cpu.prof")
    defer f.Close()

    pprof.StartCPUProfile(f)
    defer pprof.StopCPUProfile()

    // Code to profile
}
```

### Memory Profiling

```go
import (
    "os"
    "runtime"
    "runtime/pprof"
)

func profileMemory() {
    f, _ := os.Create("mem.prof")
    defer f.Close()

    runtime.GC()
    pprof.WriteHeapProfile(f)
}
```

### Analyse Profiles

```bash
# View CPU profile
go tool pprof cpu.prof

# View memory profile
go tool pprof mem.prof

# Web interface
go tool pprof -http=:8080 cpu.prof
```

## Best Practices

### ✅ Do

- Profile before optimising
- Cache expensive operations
- Use pagination for large datasets
- Debounce frequent events
- Pool resources
- Clean up goroutines
- Optimise bundle size
- Use lazy loading

### ❌ Don't

- Don't optimise prematurely
- Don't ignore memory leaks
- Don't block the main thread
- Don't return huge datasets
- Don't skip profiling
- Don't forget cleanup

## Performance Checklist

- [ ] Frontend bundle optimised
- [ ] Images compressed
- [ ] Code splitting implemented
- [ ] Backend methods paginated
- [ ] Caching implemented
- [ ] Goroutines cleaned up
- [ ] Events debounced
- [ ] Binary size optimised
- [ ] Profiling done
- [ ] Memory leaks fixed

## Next Steps

- [Architecture](/guides/architecture) - Application architecture patterns
- [Testing](/guides/testing) - Test your application
- [Building](/guides/building) - Build optimised binaries