RxInferServer – Remote Bayesian Inference from Python via Julia

5 points by bvdmitri 2 days ago

We’ve just released a Python SDK for RxInferServer, letting developers perform remote Bayesian inference against models hosted on RxInferServer.

What is RxInfer? https://reactivebayes.github.io/RxInfer.jl/

RxInfer.jl is a Julia package designed for reactive message passing and probabilistic programming. It facilitates real-time Bayesian inference in complex models, supporting both exact and variational inference algorithms.

Key Features:

- Remote Model Execution: Call RxInfer models hosted on RxInferServer directly from Python.
- OpenAPI Specification: RxInferServer exposes an OpenAPI interface, allowing for seamless integration and client generation.
- Julia Interoperability: With minor modifications, RxInferServer can execute arbitrary Julia code, not limited to RxInfer models.
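Because the server speaks OpenAPI, any HTTP client can talk to it. Here is a minimal Python sketch of what the request/response plumbing for a remote inference call could look like. The endpoint name, payload fields, and response shape below are illustrative assumptions, not the actual RxInferServer schema — the real contract is defined by the server's OpenAPI spec, and the generated SDK linked below is the supported route.

```python
import json

# Sketch of client-side plumbing for a remote inference call.
# NOTE: the endpoint path, field names, and response layout here are
# assumptions for illustration; consult the server's OpenAPI spec
# (or use the generated Python SDK) for the real schema.

def build_infer_request(model_name, observations):
    """Assemble a JSON body for a hypothetical POST /models/{name}/infer."""
    return {
        "model": model_name,
        "data": {"observations": list(observations)},
    }

def parse_posterior_means(response_body):
    """Extract per-state posterior means from a hypothetical response body."""
    return [state["mean"] for state in response_body["results"]["states"]]

# Build a request for a state-space model with a short observation series.
body = build_infer_request("state_space_model", [1.0, 1.2, 0.9])
print(json.dumps(body))

# A response with the assumed shape, for demonstration only:
fake_response = {"results": {"states": [{"mean": 1.01}, {"mean": 1.15}]}}
print(parse_posterior_means(fake_response))  # [1.01, 1.15]
```

In practice the generated SDK wraps this kind of serialization and HTTP handling for you; the point of the OpenAPI interface is that non-Python clients can be generated the same way.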

Example Notebook: State-Space Model Example https://lazydynamics.github.io/RxInferClient.py/examples/state-space-model/

Server Documentation https://server.rxinfer.com

Python SDK Repository https://github.com/lazydynamics/RxInferClient.py

We welcome feedback from the developer community, especially those interested in integrating Bayesian inference into Python workflows or exploring cross-language model execution.

shoo 2 days ago

I'd be interested to know of applications where RxInfer (or similar approximate variational inference approaches) has been demonstrated to perform much better than competing Bayesian inference approaches, in the sense of a combined performance, accuracy, and maintainability/stability engineering tradeoff. I'd also like to know of applications where the approximations used by RxInfer introduce too much error to be useful and other methods are preferred. Examples of commercialised / "industrial scale" applications that are a great fit for RxInfer's approximations (and of applications that are likely to be a poor fit) would be especially convincing!

I'm also curious whether, once a reasonable way to model a problem with RxInfer is found, better results (either faster evaluation or lower approximation error) can be obtained by throwing more hardware at it (CPU, RAM, GPU, etc.). Or does inference speed tend not to be a practical bottleneck? And if the bottleneck is that approximation error is too high, is the remedy to reformulate the problem, switch to another method (i.e. requiring R&D), or gather more data to better constrain model fits?