## Overview

Linea is designed from the ground up for machine learning. It provides native support for tensors, matrices, neural networks, and GPU acceleration, with no external ML frameworks needed.
## Tensors and Matrices

### Creating Matrices

```linea
var mat @ Matrix = [
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
    [7.0, 8.0, 9.0]
]
```
### Matrix Operations

```linea
var a @ Matrix = [[1.0, 2.0], [3.0, 4.0]]
var b @ Matrix = [[5.0, 6.0], [7.0, 8.0]]

// Matrix multiplication
var result @ Matrix = matmul(a, b)
```
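The arithmetic behind a 2×2 matrix product is worth spelling out: each output entry is the dot product of a row of `a` with a column of `b`. A hand-rolled plain-Python sketch of the same computation (an illustration of the math, not Linea's implementation):

```python
# 2x2 matrix multiplication by hand: result[i][j] = sum_k a[i][k] * b[k][j]
a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]

result = [
    [sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
    for i in range(2)
]
print(result)  # [[19.0, 22.0], [43.0, 50.0]]
```

For example, the top-left entry is 1·5 + 2·7 = 19.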
### Tensor Operations

```linea
var tensor @ Tensor = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[5.0, 6.0], [7.0, 8.0]]
]

// Broadcasting and element-wise operations
var scaled @ Tensor = tensor * 2.0
```
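Broadcasting a scalar over a tensor means applying the operation to every element, regardless of the tensor's shape. A minimal plain-Python sketch of that behavior (recursing over nested lists; not how Linea implements it internally):

```python
# Scale every element of a nested-list "tensor" by a scalar,
# mimicking the broadcast expression `tensor * 2.0`.
def scale(t, s):
    if isinstance(t, list):
        return [scale(x, s) for x in t]
    return t * s

tensor = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]
scaled = scale(tensor, 2.0)
print(scaled)  # [[[2.0, 4.0], [6.0, 8.0]], [[10.0, 12.0], [14.0, 16.0]]]
```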
## Common ML Operations

### Vector Dot Product

```linea
var v1 @ [float] = [1.0, 2.0, 3.0]
var v2 @ [float] = [4.0, 5.0, 6.0]
var dot @ float = dot_product(v1, v2) // 32.0
```
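The dot product is the sum of pairwise products, which is why the result above is 1·4 + 2·5 + 3·6 = 32. The same arithmetic in plain Python:

```python
# Dot product: sum of element-wise products of two equal-length vectors
v1 = [1.0, 2.0, 3.0]
v2 = [4.0, 5.0, 6.0]
dot = sum(x * y for x, y in zip(v1, v2))
print(dot)  # 32.0
```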
### Activation Functions

```linea
var x @ float = 2.5
var relu_out @ float = relu(x)       // ReLU activation
var sigmoid_out @ float = sigmoid(x) // Sigmoid
var tanh_out @ float = tanh(x)       // Tanh
```
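These activations follow their standard definitions: ReLU is max(0, x), sigmoid is 1 / (1 + e^(-x)), and tanh is the hyperbolic tangent. Assuming Linea's built-ins match those definitions, the values at x = 2.5 can be checked in plain Python:

```python
import math

def relu(x):
    # ReLU: clamp negative values to zero
    return max(0.0, x)

def sigmoid(x):
    # Logistic sigmoid: squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

x = 2.5
print(relu(x))                  # 2.5
print(round(sigmoid(x), 4))     # 0.9241
print(round(math.tanh(x), 4))   # 0.9866
```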
### Softmax

```linea
var logits @ [float] = [1.0, 2.0, 3.0]
var probs @ [float] = softmax(logits)
```
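Softmax exponentiates each logit and normalizes by the sum, producing a probability distribution (non-negative values summing to 1). A plain-Python sketch of the standard formula, including the usual subtract-the-max trick for numerical stability (whether Linea applies that trick internally is an assumption):

```python
import math

logits = [1.0, 2.0, 3.0]

# Subtract the max logit before exponentiating to avoid overflow;
# this shift does not change the resulting probabilities.
m = max(logits)
exps = [math.exp(z - m) for z in logits]
total = sum(exps)
probs = [e / total for e in exps]
print([round(p, 4) for p in probs])  # [0.09, 0.2447, 0.6652]
```

Note that the largest logit gets the largest probability, and the outputs sum to 1.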
## Loss Functions

### Mean Squared Error (MSE)

```linea
var predicted @ [float] = [1.2, 2.1, 2.9]
var actual @ [float] = [1.0, 2.0, 3.0]
var loss @ float = mse(predicted, actual)
```
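MSE is the mean of the squared differences between predictions and targets. Assuming Linea's `mse` follows that standard definition, the example above works out to ((0.2)² + (0.1)² + (−0.1)²) / 3 = 0.02:

```python
# Mean squared error: average of squared prediction errors
predicted = [1.2, 2.1, 2.9]
actual = [1.0, 2.0, 3.0]
loss = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
print(round(loss, 4))  # 0.02
```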
### Cross Entropy

```linea
var predictions @ [float] = [0.1, 0.7, 0.2]
var targets @ [float] = [0.0, 1.0, 0.0]
var loss @ float = cross_entropy(predictions, targets)
```
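Cross entropy is −Σ tᵢ·log(pᵢ). With a one-hot target like the one above, only the true class contributes, so the loss reduces to −log(0.7) ≈ 0.357. Assuming Linea's `cross_entropy` uses this standard formula:

```python
import math

predictions = [0.1, 0.7, 0.2]
targets = [0.0, 1.0, 0.0]

# Cross entropy: -sum(t * log(p)); for a one-hot target this is
# just the negative log-probability assigned to the true class.
loss = -sum(t * math.log(p) for t, p in zip(targets, predictions))
print(round(loss, 4))  # 0.3567
```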
## GPU Acceleration

Mark functions for GPU execution with the `@gpu` decorator:

```linea
@gpu
func matrix_multiply(a: Matrix, b: Matrix) -> Matrix {
    return matmul(a, b)
}
```

> 💡 **Tip:** Use GPU paths for large matrix operations and neural network layers. Linea automatically handles GPU/CPU selection based on device hierarchy (dGPU > iGPU > CPU).
## Common Patterns

### Normalize Data

```linea
var data @ [float] = [1.0, 2.0, 3.0, 4.0, 5.0]
var mean @ float = mean(data)
var std @ float = std(data)
var normalized @ [float] = (data - mean) / std
```
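Standardization shifts the data to zero mean and scales it to unit variance. A plain-Python sketch of the arithmetic, assuming `std` is the population standard deviation (dividing by n; whether Linea uses n or n−1 is an assumption):

```python
data = [1.0, 2.0, 3.0, 4.0, 5.0]

mean = sum(data) / len(data)
# Population standard deviation: sqrt of the mean squared deviation
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5

normalized = [(x - mean) / std for x in data]
print([round(x, 4) for x in normalized])
```

After normalization the values are centered on zero, which typically improves training convergence.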
## Best Practices

- Use matrices for batch operations: faster than element-wise loops
- Mark GPU functions: use the `@gpu` decorator for acceleration
- Normalize inputs: better convergence during training
- Use appropriate dtypes: float32 for most ML, float64 when extra precision is needed
- Profile your code: find bottlenecks before optimizing