added Lux, DifferentialEquations, AbstractGPs tutorials #51
Open · PTWaade wants to merge 14 commits into TuringLang:main from PTWaade:main
Changes from all commits (14 commits):
208c146  Renamed miscellanoues to external packages (PTWaade)
e009f96  renamed metabeyesian_Turing_HMC (PTWaade)
1bba1fa  fixed previous renaming (PTWaade)
9c54e0f  added DifferentialEquations (PTWaade)
8cf4f73  added Lux, updated DIfferentialEquations (PTWaade)
4f33fcd  added AbstractGPs (PTWaade)
a0dcdf5  Merge branch 'main' into add_external_packages (PTWaade)
fa70f11  Merge pull request #1 from PTWaade/add_external_packages (PTWaade)
f9b4c4a  deleted previous ordinary_diffeq (PTWaade)
d58a87f  removed DataFrames dependency (PTWaade)
ef2e7fa  removed rng from data generation (PTWaade)
b0313dd  reduced DifferentialEquations dependency (PTWaade)
6d29676  simplified names (PTWaade)
fbb1809  updated ODE and DDE examples to match Penelope's style (PTWaade)
New file (@@ -0,0 +1,3 @@):

```json
{
    "julia.environmentPath": "/Users/au568658/Library/CloudStorage/OneDrive-Aarhusuniversitet/Academ/Projects/Software/ADTests.jl"
}
```
New file (@@ -0,0 +1,24 @@):

```julia
#=
This is an example of using AbstractGPs.jl with Turing to model a Gaussian process.
The model is adapted from the Turing documentation: https://turinglang.org/docs/tutorials/gaussian-processes-introduction/
=#
using Turing
using AbstractGPs
using LogExpFunctions

# Load data
distance = [2.0, 3.0, 4.0, 5.0, 6.0]
n = [1443, 694, 455, 353, 272]
y = [1346, 577, 337, 208, 149]

# Make Turing model
@model function AbstractGPs_GP(d, n, y; jitter=1e-4)
    v ~ Gamma(2, 1)
    l ~ Gamma(4, 1)
    f = GP(v * with_lengthscale(SEKernel(), l))
    f_latent ~ f(d, jitter)
    y ~ product_distribution(Binomial.(n, logistic.(f_latent)))
    return (fx=f(d, jitter), f_latent=f_latent, y=y)
end

model = AbstractGPs_GP(distance, n, y)
```
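As a hedged sketch of how such a model file might be exercised (not part of the PR), the standard Turing `sample` entry point applies; the sampler choice and chain length below are illustrative assumptions:

```julia
using Turing
using MCMCChains: group

# Assumes `model = AbstractGPs_GP(distance, n, y)` from the file above is in scope.
# NUTS settings and chain length are illustrative, not taken from the PR.
chain = sample(model, NUTS(), 200)

# The latent GP values are stored under the `f_latent` parameter name.
f_post = group(chain, :f_latent)
```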
New file (@@ -0,0 +1,41 @@):

```julia
#=
This is an example of using DifferentialEquations.jl with Turing to model the delayed Lotka–Volterra equations (predator-prey model).
The model is adapted from the Turing documentation: https://turinglang.org/docs/tutorials/bayesian-differential-equations/
=#
using Turing
using DelayDiffEq: DDEProblem, solve, MethodOfSteps, Tsit5

# SciMLSensitivity is needed for reverse-mode AD on differential equations
import SciMLSensitivity

function delay_lotka_volterra(du, u, h, p, t)
    α, β, γ, δ = p
    x, y = u
    du[1] = α * h(p, t - 1; idxs=1) - β * x * y
    du[2] = -γ * y + δ * x * y
    return nothing
end
p = (1.5, 1.0, 3.0, 1.0)
u0 = [1.0; 1.0]
tspan = (0.0, 10.0)
h(p, t; idxs::Int) = 1.0
prob_dde = DDEProblem(delay_lotka_volterra, u0, h, tspan, p)
sol_dde = solve(prob_dde; saveat=0.1)
q = 1.7
ddedata = rand.(Poisson.(q .* Array(sol_dde)))

@model function DifferentialEquations_DDE(data, prob)
    α ~ truncated(Normal(1.5, 0.2); lower=0.5, upper=2.5)
    β ~ truncated(Normal(1.1, 0.2); lower=0, upper=2)
    γ ~ truncated(Normal(3.0, 0.2); lower=1, upper=4)
    δ ~ truncated(Normal(1.0, 0.2); lower=0, upper=2)
    q ~ truncated(Normal(1.7, 0.2); lower=0, upper=3)
    p = [α, β, γ, δ]
    predicted = solve(prob, MethodOfSteps(Tsit5()); p=p, saveat=0.1, abstol=1e-6, reltol=1e-6)
    ϵ = 1e-5
    for i in eachindex(predicted)
        data[:, i] ~ arraydist(Poisson.(q .* predicted[i] .+ ϵ))
    end
    return nothing
end

model = DifferentialEquations_DDE(ddedata, prob_dde)
```
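A hedged sketch of sampling from this DDE model, assuming a recent Turing version that accepts an AD backend through the sampler's `adtype` keyword; the backend and chain length here are illustrative, not from the PR:

```julia
using Turing
using ADTypes: AutoForwardDiff

# Assumes `model = DifferentialEquations_DDE(ddedata, prob_dde)` from above is in scope.
# Recent Turing versions accept an AD backend via the sampler's `adtype` keyword;
# forward-mode and 100 iterations are illustrative choices.
chain = sample(model, NUTS(; adtype=AutoForwardDiff()), 100)
```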
New file (@@ -0,0 +1,39 @@):

```julia
#=
This is an example of using DifferentialEquations.jl with Turing to model the Lotka–Volterra equations (predator-prey model).
The model is adapted from the Turing documentation: https://turinglang.org/docs/tutorials/bayesian-differential-equations/
=#
using Turing
using OrdinaryDiffEq: ODEProblem, solve, Tsit5

# SciMLSensitivity is needed for reverse-mode AD on differential equations
import SciMLSensitivity

function lotka_volterra(du, u, p, t)
    α, β, γ, δ = p
    x, y = u
    du[1] = (α - β * y) * x # prey
    du[2] = (δ * x - γ) * y # predator
    return nothing
end
u0 = [1.0, 1.0]
p = [1.5, 1.0, 3.0, 1.0]
tspan = (0.0, 10.0)
prob = ODEProblem(lotka_volterra, u0, tspan, p)
sol = solve(prob, Tsit5(); saveat=0.1)
q = 1.7
odedata = rand.(Poisson.(q * Array(sol)))

@model function DifferentialEquations_ODE(data, prob)
    α ~ truncated(Normal(1.5, 0.2); lower=0.5, upper=2.5)
    β ~ truncated(Normal(1.1, 0.2); lower=0, upper=2)
    γ ~ truncated(Normal(3.0, 0.2); lower=1, upper=4)
    δ ~ truncated(Normal(1.0, 0.2); lower=0, upper=2)
    q ~ truncated(Normal(1.7, 0.2); lower=0, upper=3)
    p = [α, β, γ, δ]
    predicted = solve(prob, Tsit5(); p=p, saveat=0.1, abstol=1e-6, reltol=1e-6)
    for i in eachindex(predicted)
        data[:, i] ~ product_distribution(Poisson.(q .* predicted[i] .+ 1e-5))
    end
    return nothing
end

model = DifferentialEquations_ODE(odedata, prob)
```
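Since this repository is about testing AD, it can also be useful to evaluate the model's log density directly rather than sampling. A sketch under the assumption that DynamicPPL's `LogDensityFunction` implements the `LogDensityProblems` interface (constructor details can vary across versions):

```julia
using Turing, DynamicPPL, LogDensityProblems

# Assumes `model = DifferentialEquations_ODE(odedata, prob)` from above is in scope.
ldf = DynamicPPL.LogDensityFunction(model)
d = LogDensityProblems.dimension(ldf)  # one entry per sampled parameter: α, β, γ, δ, q
lp = LogDensityProblems.logdensity(ldf, randn(d))
```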
New file (@@ -0,0 +1,79 @@):

```julia
#=
This is an example of using Lux.jl with Turing to implement a Bayesian neural network.
The model is adapted from the Turing documentation: https://turinglang.org/docs/tutorials/bayesian-neural-networks/
=#
using Turing
using Lux
using Random
using LinearAlgebra
using Functors

## Simulate data ##
# Number of points to generate
N = 80
M = round(Int, N / 4)
rng = Random.default_rng()
Random.seed!(rng, 1234)

# Generate artificial data
x1s = rand(Float32, M) * 4.5f0
x2s = rand(Float32, M) * 4.5f0
xt1s = Array([[x1s[i] + 0.5f0; x2s[i] + 0.5f0] for i in 1:M])
x1s = rand(Float32, M) * 4.5f0
x2s = rand(Float32, M) * 4.5f0
append!(xt1s, Array([[x1s[i] - 5.0f0; x2s[i] - 5.0f0] for i in 1:M]))

x1s = rand(Float32, M) * 4.5f0
x2s = rand(Float32, M) * 4.5f0
xt0s = Array([[x1s[i] + 0.5f0; x2s[i] - 5.0f0] for i in 1:M])
x1s = rand(Float32, M) * 4.5f0
x2s = rand(Float32, M) * 4.5f0
append!(xt0s, Array([[x1s[i] - 5.0f0; x2s[i] + 0.5f0] for i in 1:M]))

# Store all the data for later
xs = [xt1s; xt0s]
ts = [ones(2 * M); zeros(2 * M)]

## Create neural network ##
# Construct a neural network using Lux
nn_initial = Chain(Dense(2 => 3, tanh), Dense(3 => 2, tanh), Dense(2 => 1, σ))

# Initialize the model weights and state
ps, st = Lux.setup(rng, nn_initial)

# Create a regularization term and a Gaussian prior variance term.
alpha = 0.09
sigma = sqrt(1.0 / alpha)

# Unflatten a parameter vector into the nested NamedTuple layout Lux expects
function vector_to_parameters(ps_new::AbstractVector, ps::NamedTuple)
    @assert length(ps_new) == Lux.parameterlength(ps)
    i = 1
    function get_ps(x)
        z = reshape(view(ps_new, i:(i + length(x) - 1)), size(x))
        i += length(x)
        return z
    end
    return fmap(get_ps, ps)
end

const nn = StatefulLuxLayer{true}(nn_initial, nothing, st)

## Create Turing model ##
# Specify the probabilistic model.
@model function Lux_nn(xs, ts; sigma=sigma, ps=ps, nn=nn)
    # Sample the parameters
    nparameters = Lux.parameterlength(nn_initial)
    parameters ~ MvNormal(zeros(nparameters), Diagonal(abs2.(sigma .* ones(nparameters))))

    # Forward NN to make predictions
    preds = Lux.apply(nn, xs, f32(vector_to_parameters(parameters, ps)))

    # Observe each prediction.
    for i in eachindex(ts)
        ts[i] ~ Bernoulli(preds[i])
    end
end

model = Lux_nn(reduce(hcat, xs), ts)
```
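A hedged usage sketch for the network model (illustrative sampler and chain length; not part of the PR):

```julia
using Turing

# Assumes `model = Lux_nn(reduce(hcat, xs), ts)` and `nn_initial` from above are in scope.
# All network weights are sampled jointly under the `parameters` vector, whose length
# equals Lux.parameterlength(nn_initial) (20 for this 2=>3, 3=>2, 2=>1 chain).
chain = sample(model, NUTS(), 50)
```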
This file was deleted.
I think it's fine not to seed the rng: as long as AD can run on any values, it's not too important which values it runs on. There's of course nothing wrong with seeding it, but it's generally best IMO to strip the model to the bare minimum needed.
I've removed it everywhere I could. `Lux.setup` seemed to require it, though. Want me to look for ways to avoid it, or is it okay?
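For context on the `Lux.setup` point: a minimal sketch (hypothetical layer, no seeding) showing that `Lux.setup` takes an rng as its first positional argument, so some rng object is needed even when reproducibility doesn't matter:

```julia
using Lux, Random

layer = Dense(2 => 3, tanh)
# Lux.setup requires an rng argument; an unseeded default rng is enough.
ps, st = Lux.setup(Random.default_rng(), layer)
```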