python3-klepto
Persistent caching to memory, disk or database
klepto extends Python's lru_cache to utilise different keymaps and
alternate caching algorithms. This package also has archiving
capabilities for longer-term storage. It uses a simple
dictionary-style interface for all caches and archives, and all
caches can be applied to any Python function as a decorator.
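A minimal sketch of the dictionary-style interface and the decorator form
(the keyword arguments are assumptions based on klepto's documented API,
not part of this description):

  from klepto import lru_cache
  from klepto.archives import file_archive

  # dictionary-style archive backed by a single file on disk
  db = file_archive('results.pkl')
  db['answer'] = 42          # behaves like a dict
  db.dump()                  # write the entries to disk
  db.load()                  # reload them in a later session

  # the same kind of archive can back a function cache applied as a decorator
  @lru_cache(maxsize=100, cache=file_archive('square.pkl'))
  def square(x):
      return x * x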
python3-dict2css
μ-library for transforming Python dicts to CSS (Python 3)
This package provides an API similar to the json and toml modules, with dump
and load functions.
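For example, a minimal sketch of that json-like interface (the selectors
and properties are made up for illustration):

  from dict2css import dumps

  style = {
      '.header': {'color': 'red', 'font-weight': 'bold'},
      'p': {'margin-top': '1em'},
  }

  print(dumps(style))   # emits the corresponding CSS rules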
rdflib-endpoint
SPARQL endpoint for RDFLib - command line tool
rdflib-endpoint is a SPARQL endpoint built on RDFLib for easily serving
local RDF files, machine learning models, or any other logic implemented
in Python via custom SPARQL functions.
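A minimal sketch of serving a local RDF file (the file name and server
invocation are assumptions; custom SPARQL functions are omitted for brevity):

  from rdflib import Graph
  from rdflib_endpoint import SparqlEndpoint

  g = Graph()
  g.parse('data.ttl')            # any local RDF file

  # SparqlEndpoint wraps the graph as an ASGI application
  app = SparqlEndpoint(graph=g)

  # run with an ASGI server, e.g.:  uvicorn main:app
  # then send SPARQL queries to the running endpoint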
python3-mystic
Constrained nonlinear optimization
The mystic framework provides a collection of optimization algorithms
and tools that allow the user to more robustly (and easily) solve
hard optimization problems for machine learning, uncertainty
quantification and AI. mystic gives the user fine-grained power to
both monitor and steer optimizations as the fit processes are
running. Users can customize optimizer stop conditions, where both
compound and user-provided conditions may be used. Optimizers can
save state, can be reconfigured dynamically, and can be restarted
from a saved solver or from a results file. All solvers can also
leverage parallel computing, either within each iteration or as an
ensemble of solvers.
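A minimal sketch of steering a solver with a monitor and a custom stop
condition (class and method names follow mystic's documented solver API;
treat the details as illustrative):

  from mystic.solvers import NelderMeadSimplexSolver
  from mystic.termination import CandidateRelativeTolerance
  from mystic.monitors import VerboseMonitor
  from mystic.models import rosen          # Rosenbrock test function

  solver = NelderMeadSimplexSolver(3)      # 3-dimensional problem
  solver.SetInitialPoints([1.0, 2.0, 3.0])
  solver.SetGenerationMonitor(VerboseMonitor(10))   # report progress every 10 steps
  solver.SetTermination(CandidateRelativeTolerance(xtol=1e-4, ftol=1e-4))
  solver.Solve(rosen)

  print(solver.bestSolution, solver.bestEnergy)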
python3-bleak
Bluetooth Low Energy platform-agnostic client
Bleak is an acronym for Bluetooth Low Energy platform Agnostic Klient.
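A minimal sketch of the asyncio-based client side, discovering nearby
BLE devices (not part of this description):

  import asyncio
  from bleak import BleakScanner

  async def main():
      devices = await BleakScanner.discover()
      for d in devices:
          print(d.address, d.name)

  asyncio.run(main())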
lua-resty-lrucache
Simple LRU cache for the ngx_lua module
The LRU cache resides completely in the Lua VM and is subject to Lua GC.
As such, do not expect it to get shared across the OS process boundary.
The upside is that you can cache arbitrarily complex Lua values (such as deeply
nested Lua tables) without the overhead of serialization (as with ngx_lua's
shared dictionary API). The downside is that your cache is always limited to
the current OS process (i.e. the current Nginx worker process). It does not
really make much sense to use this library in the context of init_by_lua
because the cache will not get shared by any of the worker processes (unless
you just want to "warm up" the cache with predefined items which will
get inherited by the workers via fork()).