A modern language for data science
Keel combines Python's simplicity with Rust's performance. Strong types, parallel execution on CPU and GPU, and a curated package ecosystem—from browser to bare metal.
Why Keel?
Type-Safe at Scale
Catch errors at compile time with full type inference. Immutable by default, designed for large teams and complex codebases.
CPU & GPU Parallel (Planned)
No GIL limitations. Native parallelization across all CPU cores, with CUDA and HIP support for GPU acceleration.
Universal Runtime
From browsers via WebAssembly to IoT devices and edge networks. One language for every platform.
Quality Ecosystem (Planned)
State-of-the-art package manager with vetted, high-quality modules. Multiple versions can coexist seamlessly.
Elegant & Expressive
First-class functions, automatic currying, pipe operators, and pattern matching with exhaustiveness checking.
Built-in Tooling
LSP support, interactive REPL, inline documentation, and helpful error messages with fuzzy suggestions.
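The "Elegant & Expressive" features above can be sketched in a few lines. Since this page only confirms Keel's syntax through the DataFrame example further down, the `|>` pipe operator and the `match ... with` form below are assumptions modeled on that example's style, not documented syntax:

```
-- Hypothetical sketch of pipes, currying, and pattern matching
-- (`|>` and `match ... with` are assumed forms, not confirmed Keel syntax)
let describe revenue =
    match revenue > 1000 with
    | True -> "high"
    | False -> "standard"

-- Automatic currying: `map describe` is a partially applied function
let labels =
    [1200, 800]
        |> map describe
```

With exhaustiveness checking, omitting either `match` branch would be a compile-time error rather than a runtime surprise.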
Code Examples
-- Automatic data lineage tracking
import DataFrame

let sales =
    DataFrame.fromRecords
        [ { product = "Laptop", revenue = 1200 }
        , { product = "Phone", revenue = 800 }
        ]

let filtered = DataFrame.filterGt "revenue" 500 sales

let withMargin =
    DataFrame.withColumn "margin" [240, 160] filtered

DataFrame.lineage withMargin

Built For Real Work
From data exploration to production deployment, Keel handles it all.
Data Analysis
Explore datasets with expressive pipelines. Strong typing catches errors early, while lazy evaluation handles data larger than memory.
Machine Learning
Build and train models with automatic differentiation. GPU acceleration via CUDA/HIP for heavy computation.
Scientific Computing
Precise numeric types and SIMD parallelism for simulations, numerical analysis, and research applications.
Web & WASM
Compile to WebAssembly for browser-based tools, interactive visualizations, and edge computing.
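The "expressive pipelines" idea can be sketched by rewriting the lineage example above in pipeline form. The DataFrame functions are taken from that example; the `|>` pipe operator is an assumption based on the feature list, and the sketch assumes it threads the value into the last argument position:

```
-- Hypothetical: the lineage example rewritten as a pipeline
-- (assumes `|>` passes the DataFrame as the final argument)
let report =
    sales
        |> DataFrame.filterGt "revenue" 500
        |> DataFrame.withColumn "margin" [240, 160]
```

Because `DataFrame.filterGt "revenue" 500` takes the frame last, currying lets each stage read as a transformation applied in order.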
How Keel Compares
See how Keel stacks up against popular languages for data science.
| Feature | Keel | Python | R | Julia |
|---|---|---|---|---|
| Type System | Strong, inferred | Dynamic | Dynamic | Dynamic (optional types) |
| Performance | Native speed | Interpreted (slow) | Interpreted (vectorized) | JIT compiled |
| Parallelism | Planned | GIL limited | Limited | Built-in |
| GPU Support | Planned | Via libraries | Via libraries | Built-in |
| Learning Curve | Moderate | Easy | Moderate | Moderate |
| Ecosystem | Planned | Vast | Vast (statistics) | Moderate |
| Memory Safety | Guaranteed | Runtime checks | Runtime checks | Runtime checks |
Installation
# Run directly without installing
nix run git+https://codeberg.org/Keel/keel-cli
# Or build the package
nix build git+https://codeberg.org/Keel/keel-cli
# Enter a development shell
nix develop git+https://codeberg.org/Keel/keel-cli
Prerequisites: Rust 1.70+ and Cargo (for building from source), or Nix with flakes enabled (for Nix installation).
Roadmap
What's coming next for Keel.
Core Language
Completed: Type system, pattern matching, modules, REPL
Standard Library
In Progress: Collections, I/O, networking, concurrency primitives
Package Manager
In Progress: Dependency resolution, versioning, registry
GPU Backend
Planned: CUDA and HIP support for parallel computation
DataFrame Library
Completed: Polars-backed columnar data with window functions, schema validation, and metadata
ML Framework
Planned: Automatic differentiation and neural networks
Frequently Asked Questions
Is Keel production-ready?
Keel is currently in alpha (v0.1.0). The core language is stable, but the ecosystem is still growing. It's great for experimentation and side projects, but we recommend caution for production workloads.
How does Keel compare to Python?
Keel offers Python-like simplicity with Rust-like performance. The type system catches errors at compile time rather than runtime, and there's no GIL limiting parallelism. However, Python's ecosystem is vastly larger.
Can I use existing Python libraries?
Not directly. Keel has its own ecosystem. However, you can interop through FFI for C libraries, and we're working on Python bindings for embedding Keel in Python projects.
How fast is Keel?
Keel compiles to native code, typically matching or exceeding Rust performance. For numeric workloads, SIMD optimizations are automatic. GPU acceleration is planned for compute-intensive tasks.
What programming paradigm does Keel use?
Primarily functional, with immutable-by-default values, first-class functions, and pattern matching. However, you can opt into mutable state when needed for performance-critical code.
How can I contribute?
Check out our Codeberg repository! We welcome contributions of all kinds: code, documentation, examples, and bug reports. Join the community chat to get started.