skillry

Browse Skills

21 skills available

# deadline-prep.md

Generate a structured demo outline from your session's change log and git history. Reads `.claude/critical_log_changes.csv` and `git log` to produce a presentation-ready outline.

productivity
Mar 1, 2026
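The deadline-prep card describes a CSV-plus-git-history pipeline. A minimal stdlib sketch of the CSV-to-outline half might look like the following; the column names `category` and `summary` are assumptions, since the real schema of `.claude/critical_log_changes.csv` is not shown here.

```python
import csv
import io
from collections import defaultdict

def outline_from_log(csv_text: str) -> str:
    """Group change-log rows by category and emit a markdown demo outline.

    Assumes columns named 'category' and 'summary'; the actual skill's
    CSV schema may differ.
    """
    sections = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        sections[row["category"]].append(row["summary"])

    lines = ["# Demo Outline"]
    for category, items in sections.items():
        lines.append(f"## {category}")          # one section per category
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

log = "category,summary\nAPI,Added /health endpoint\nAPI,Fixed auth bug\nUI,New dashboard"
print(outline_from_log(log))
```

The real skill would presumably merge in `git log` output as well; this sketch covers only the change-log grouping step.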
# latchbio-integration.md

Latch platform for bioinformatics workflows. Build pipelines with the Latch SDK, @workflow/@task decorators, deploy serverless workflows, use LatchFile/LatchDir, and Nextflow support.

scientific
Mar 1, 2026
# railway-service.md

Check service status, rename services, change service icons, link services, or create services with Docker images. For creating services with local code, prefer the railway-deploy skill.

railway
Mar 1, 2026
# training-llms-megatron.md

Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use when training models >1B parameters or when advanced parallelism is needed.

ai research
Mar 1, 2026
# diffdock.md

Diffusion-based molecular docking. Predicts protein-ligand binding poses from PDB/SMILES inputs with confidence scores; supports virtual screening for structure-based drug design.

scientific
Mar 1, 2026
# railway-deploy.md

Deploy code to Railway using "railway up". Use when the user wants to push code, or says "railway up", "deploy", "ship", or "push". For initial setup or creating services, see the railway-service skill.

railway
Mar 1, 2026
# miles-rl-training.md

Provides guidance for enterprise-grade RL training using miles, a production-ready fork of slime. Use when training large MoE models with FP8/INT4.

ai research
Mar 1, 2026
# docker-hub-automation.md

Automate Docker Hub operations: manage organizations, repositories, teams, members, and webhooks via the Composio MCP integration.

engineering
Mar 1, 2026
# llama-cpp.md

Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable.

ai research
Mar 1, 2026
Page 2 of 2