Draft: Train neural network for roman pots momentum reconstruction #10

Draft · wants to merge 31 commits into base: master
4a5c335
Add initial script for training neural network for roman pots momentu…
rahmans1 Jan 10, 2024
77a117d
Simplify layer description code using nn.ModuleList
rahmans1 Jan 12, 2024
1a5d06e
Calculate the sample mean and standard deviation to pass on as standa…
rahmans1 Jan 12, 2024
7398a0b
Use number of epochs and learning rate as training hyper parameters
rahmans1 Jan 12, 2024
c4e1703
Update number format and precision in training progress message
rahmans1 Jan 12, 2024
45ffea0
Functions to run experiments with parametrize inputs to the neural ne…
rahmans1 Jan 12, 2024
92bc46e
Import argparse and sys python modules
rahmans1 Jan 12, 2024
5ad5a79
Fix typo in call to add_argument function
rahmans1 Jan 12, 2024
e1b0786
Fix typo. Missing '--' in list of arguments.
rahmans1 Jan 12, 2024
c4310f3
Input file locations will be passed along with list of hyperparameters
rahmans1 Jan 12, 2024
6b7dae7
Fetch the number of training inputs from hyperparameter list
rahmans1 Jan 12, 2024
34eee45
Fix typo in variable name `multiplier`
rahmans1 Jan 12, 2024
2e25da5
Cast hyperparameters to the appropriate numerical datatypes
rahmans1 Jan 12, 2024
65310c7
First commit of snakefile for roman pots neural network training. Gen…
rahmans1 Jan 19, 2024
130ff48
Unhash the detector config and detector version. Only hash the parame…
rahmans1 Jan 19, 2024
08fa4c5
Parametrise subsystem and model type for readability
rahmans1 Jan 19, 2024
2cbb87e
Truncate the hash to first 6 hexdigits. 6 hexdigits is equivalent to …
rahmans1 Jan 22, 2024
81bf1d5
Introduce steering file to simulate events for model training purposes
rahmans1 Jan 22, 2024
1e66076
Add rule to generate training events. Use DETECTOR_VERSION inferred f…
rahmans1 Jan 23, 2024
570f99d
Use epic_ip6 geometry for faster processing of far-forward events
rahmans1 Jan 23, 2024
dd7e61b
Add rule to extract hit information necessary for model training
rahmans1 Jan 23, 2024
ec5d176
Use realistic hyperparameter values
rahmans1 Jan 23, 2024
d89bae8
Add rule to train network and save models and artifacts.
rahmans1 Jan 24, 2024
f34159a
Unnecessary because directory structure is automatically created by rule
rahmans1 Jan 24, 2024
d81ab52
Variable is an array with one element. So, indexing needs to be used
rahmans1 Jan 25, 2024
805c2cd
Split up default target rule into 3 rules so that proper dependency …
rahmans1 Jan 25, 2024
2d1a65e
Simplify workflow. Build dependency between generate and process even…
rahmans1 Jan 30, 2024
acbb6c5
Use a nested directory approach to uniquely identify models instead o…
rahmans1 Jan 30, 2024
5496e3c
Changed Snakefile and preprocess_model_training_data.cxx to include t…
Feb 12, 2024
b0c5198
Extract detector version
rahmans1 Jun 4, 2024
a695c51
The default position of the planes updated to reflect current locatio…
rahmans1 Jun 4, 2024
229 changes: 229 additions & 0 deletions benchmarks/roman_pots/Snakefile
@@ -0,0 +1,229 @@
import os
from itertools import product

DETECTOR_PATH = os.environ["DETECTOR_PATH"]
DETECTOR_VERSION = os.environ["DETECTOR_VERSION"]
SUBSYSTEM = "roman_pots"
BENCHMARK = "dense_neural_network"
DETECTOR_CONFIG = "epic_ip6"
NEVENTS_PER_FILE = 5
NFILES = range(1,6)
MODEL_PZ = {
'num_epochs' : [100],
'learning_rate' : [0.01],
'size_input' : [4],
'size_output': [1],
'n_layers' : [3,6],
'size_first_hidden_layer' : [128],
'multiplier' : [0.5],
'leak_rate' : [0.025],
}
MODEL_PY = {
'num_epochs' : [100],
'learning_rate' : [0.01],
'size_input' : [3],
'size_output': [1],
'n_layers' : [3,6],
'size_first_hidden_layer' : [128],
'multiplier' : [0.5],
'leak_rate' : [0.025]
}
MODEL_PX = {
'num_epochs' : [100],
'learning_rate' : [0.01],
'size_input' : [3],
'size_output': [1],
'n_layers' : [3,7],
'size_first_hidden_layer' : [128],
'multiplier' : [0.5],
'leak_rate' : [0.025]
}
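Each `MODEL_*` dictionary maps a hyperparameter name to its list of candidate values, and `expand` in the rules below takes the Cartesian product across all of them. A minimal sketch of that expansion (the `grid` helper is ours, for illustration only, assuming the `MODEL_PZ` values above):

```python
from itertools import product

# Hyperparameter grid, mirroring MODEL_PZ above.
MODEL_PZ = {
    'num_epochs': [100],
    'learning_rate': [0.01],
    'size_input': [4],
    'size_output': [1],
    'n_layers': [3, 6],
    'size_first_hidden_layer': [128],
    'multiplier': [0.5],
    'leak_rate': [0.025],
}

def grid(params):
    """Yield one dict per combination of hyperparameter values."""
    keys = list(params)
    for values in product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

combos = list(grid(MODEL_PZ))
# Only n_layers offers two candidates, so the grid has 2 points.
```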

rule all:
input:
expand("results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/raw_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.edm4hep.root",
index=NFILES),
expand("results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.txt",
index=NFILES),
expand("results/"+DETECTOR_VERSION+"/{detector_config}/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_pz/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_pz.pt",
detector_config=DETECTOR_CONFIG,
num_epochs=MODEL_PZ["num_epochs"],
learning_rate=MODEL_PZ["learning_rate"],
size_input=MODEL_PZ["size_input"],
size_output=MODEL_PZ["size_output"],
n_layers=MODEL_PZ["n_layers"],
size_first_hidden_layer=MODEL_PZ["size_first_hidden_layer"],
multiplier=MODEL_PZ["multiplier"],
leak_rate=MODEL_PZ["leak_rate"]
Comment on lines +49 to +56 (Member) — suggested change: replace the eight explicit keyword arguments above with dictionary unpacking, `**MODEL_PZ,`
),
expand("results/"+DETECTOR_VERSION+"/{detector_config}/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_pz/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_pz.png",
detector_config=DETECTOR_CONFIG,
num_epochs=MODEL_PZ["num_epochs"],
learning_rate=MODEL_PZ["learning_rate"],
size_input=MODEL_PZ["size_input"],
size_output=MODEL_PZ["size_output"],
n_layers=MODEL_PZ["n_layers"],
size_first_hidden_layer=MODEL_PZ["size_first_hidden_layer"],
multiplier=MODEL_PZ["multiplier"],
leak_rate=MODEL_PZ["leak_rate"]
),
expand("results/"+DETECTOR_VERSION+"/{detector_config}/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_py/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_py.pt",
detector_config=DETECTOR_CONFIG,
num_epochs=MODEL_PY["num_epochs"],
learning_rate=MODEL_PY["learning_rate"],
size_input=MODEL_PY["size_input"],
size_output=MODEL_PY["size_output"],
n_layers=MODEL_PY["n_layers"],
size_first_hidden_layer=MODEL_PY["size_first_hidden_layer"],
multiplier=MODEL_PY["multiplier"],
leak_rate=MODEL_PY["leak_rate"]
),
expand("results/"+DETECTOR_VERSION+"/{detector_config}/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_py/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_py.png",
detector_config=DETECTOR_CONFIG,
num_epochs=MODEL_PY["num_epochs"],
learning_rate=MODEL_PY["learning_rate"],
size_input=MODEL_PY["size_input"],
size_output=MODEL_PY["size_output"],
n_layers=MODEL_PY["n_layers"],
size_first_hidden_layer=MODEL_PY["size_first_hidden_layer"],
multiplier=MODEL_PY["multiplier"],
leak_rate=MODEL_PY["leak_rate"]
),
expand("results/"+DETECTOR_VERSION+"/{detector_config}/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_px/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_px.pt",
detector_config=DETECTOR_CONFIG,
num_epochs=MODEL_PX["num_epochs"],
learning_rate=MODEL_PX["learning_rate"],
size_input=MODEL_PX["size_input"],
size_output=MODEL_PX["size_output"],
n_layers=MODEL_PX["n_layers"],
size_first_hidden_layer=MODEL_PX["size_first_hidden_layer"],
multiplier=MODEL_PX["multiplier"],
leak_rate=MODEL_PX["leak_rate"]
),
expand("results/"+DETECTOR_VERSION+"/{detector_config}/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_px/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_px.png",
detector_config=DETECTOR_CONFIG,
num_epochs=MODEL_PX["num_epochs"],
learning_rate=MODEL_PX["learning_rate"],
size_input=MODEL_PX["size_input"],
size_output=MODEL_PX["size_output"],
n_layers=MODEL_PX["n_layers"],
size_first_hidden_layer=MODEL_PX["size_first_hidden_layer"],
multiplier=MODEL_PX["multiplier"],
leak_rate=MODEL_PX["leak_rate"]
)
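Snakemake's `expand` fills every `{placeholder}` in the template with each combination of the supplied value lists, which is why a reviewer suggests collapsing the repeated keyword arguments into `**MODEL_PZ`. A rough stand-in (our simplified `expand`, not the real `snakemake.io` implementation) shows the two spellings are equivalent:

```python
from itertools import product

def expand(template, **wildcards):
    """Tiny stand-in for snakemake.io.expand: fill `template` with every
    combination of the supplied wildcard value lists."""
    keys = list(wildcards)
    return [
        template.format(**dict(zip(keys, values)))
        for values in product(*(wildcards[k] for k in keys))
    ]

MODEL_PZ = {'num_epochs': [100], 'n_layers': [3, 6]}

# Explicit keywords and dict unpacking produce the same targets,
# which is why **MODEL_PZ can replace the keyword-argument list.
a = expand("artifacts/num_epochs_{num_epochs}/n_layers_{n_layers}/model_pz.pt",
           num_epochs=MODEL_PZ["num_epochs"], n_layers=MODEL_PZ["n_layers"])
b = expand("artifacts/num_epochs_{num_epochs}/n_layers_{n_layers}/model_pz.pt",
           **MODEL_PZ)
```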




rule roman_pots_generate_events:
input:
script="steering_file.py"
params:
detector_path=DETECTOR_PATH,
nevents_per_file=NEVENTS_PER_FILE,
detector_config=DETECTOR_CONFIG
output:
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/raw_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.edm4hep.root"
shell:
"""
npsim --steeringFile {input.script} \
--compactFile {params.detector_path}/{params.detector_config}.xml \
Comment on lines +121 to +129 (Member) — "I would drop global variables in the leaf rules, and use wildcards where possible." Suggested change:

detector_path=DETECTOR_PATH,
nevents_per_file=NEVENTS_PER_FILE,
output:
"results/"+DETECTOR_VERSION+"/{DETECTOR_CONFIG}/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/raw_data/"+DETECTOR_VERSION+"_{DETECTOR_CONFIG}_{index}.edm4hep.root"
shell:
"""
npsim --steeringFile {input.script} \
--compactFile {params.detector_path}/{wildcards.DETECTOR_CONFIG}.xml \

--outputFile {output} \
-N {params.nevents_per_file}
"""

rule roman_pots_preprocess_model_training_data:
input:
data = "results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/raw_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.edm4hep.root",
script = "preprocess_model_training_data.cxx"
output:
full = "results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.txt",
lo = "results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_lo_{index}.txt"
shell:
"""
root -q -b {input.script}\"(\\\"{input.data}\\\",\\\"{output.full}\\\",\\\"{output.lo}\\\")\"
"""

rule roman_pots_train_model_pz:
input:
data = ["results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.txt".format(index=index) for index in NFILES],
script = "train_dense_neural_network.py"
params:
detector_version=DETECTOR_VERSION,
detector_config=DETECTOR_CONFIG,
subsystem=SUBSYSTEM,
benchmark=BENCHMARK
output:
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_pz/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_pz.pt",
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_pz/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_pz.png"
shell:
"""
python {input.script} --input_files {input.data} --model_name model_pz --model_dir results/{params.detector_version}/{params.detector_config}/detector_benchmarks/{params.subsystem}/{params.benchmark}/artifacts/model_pz/num_epochs_{wildcards.num_epochs}/learning_rate_{wildcards.learning_rate}/size_input_{wildcards.size_input}/size_output_{wildcards.size_output}/n_layers_{wildcards.n_layers}/size_first_hidden_layer_{wildcards.size_first_hidden_layer}/multiplier_{wildcards.multiplier}/leak_rate_{wildcards.leak_rate} --num_epochs {wildcards.num_epochs} --learning_rate {wildcards.learning_rate} --size_input {wildcards.size_input} --size_output {wildcards.size_output} --n_layers {wildcards.n_layers} --size_first_hidden_layer {wildcards.size_first_hidden_layer} --multiplier {wildcards.multiplier} --leak_rate {wildcards.leak_rate}
"""

rule roman_pots_train_model_py:
input:
data = ["results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.txt".format(index=index) for index in NFILES],
script = "train_dense_neural_network.py"
params:
detector_version=DETECTOR_VERSION,
detector_config=DETECTOR_CONFIG,
subsystem=SUBSYSTEM,
benchmark=BENCHMARK
output:
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_py/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_py.pt",
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_py/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_py.png"
shell:
"""
python {input.script} --input_files {input.data} --model_name model_py --model_dir results/{params.detector_version}/{params.detector_config}/detector_benchmarks/{params.subsystem}/{params.benchmark}/artifacts/model_py/num_epochs_{wildcards.num_epochs}/learning_rate_{wildcards.learning_rate}/size_input_{wildcards.size_input}/size_output_{wildcards.size_output}/n_layers_{wildcards.n_layers}/size_first_hidden_layer_{wildcards.size_first_hidden_layer}/multiplier_{wildcards.multiplier}/leak_rate_{wildcards.leak_rate} --num_epochs {wildcards.num_epochs} --learning_rate {wildcards.learning_rate} --size_input {wildcards.size_input} --size_output {wildcards.size_output} --n_layers {wildcards.n_layers} --size_first_hidden_layer {wildcards.size_first_hidden_layer} --multiplier {wildcards.multiplier} --leak_rate {wildcards.leak_rate}
"""

rule roman_pots_train_model_px:
input:
data = ["results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_{index}.txt".format(index=index) for index in NFILES],
script = "train_dense_neural_network.py"
params:
detector_version=DETECTOR_VERSION,
detector_config=DETECTOR_CONFIG,
subsystem=SUBSYSTEM,
benchmark=BENCHMARK
output:
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_px/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_px.pt",
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_px/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_px.png"
shell:
"""
python {input.script} --input_files {input.data} --model_name model_px --model_dir results/{params.detector_version}/{params.detector_config}/detector_benchmarks/{params.subsystem}/{params.benchmark}/artifacts/model_px/num_epochs_{wildcards.num_epochs}/learning_rate_{wildcards.learning_rate}/size_input_{wildcards.size_input}/size_output_{wildcards.size_output}/n_layers_{wildcards.n_layers}/size_first_hidden_layer_{wildcards.size_first_hidden_layer}/multiplier_{wildcards.multiplier}/leak_rate_{wildcards.leak_rate} --num_epochs {wildcards.num_epochs} --learning_rate {wildcards.learning_rate} --size_input {wildcards.size_input} --size_output {wildcards.size_output} --n_layers {wildcards.n_layers} --size_first_hidden_layer {wildcards.size_first_hidden_layer} --multiplier {wildcards.multiplier} --leak_rate {wildcards.leak_rate}
"""

rule roman_pots_train_model_py_lo:
input:
data = ["results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_lo_{index}.txt".format(index=index) for index in NFILES],
script = "train_dense_neural_network.py"
params:
detector_version=DETECTOR_VERSION,
detector_config=DETECTOR_CONFIG,
subsystem=SUBSYSTEM,
benchmark=BENCHMARK
output:
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_py/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_py_lo.pt",
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_py/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_py_lo.png"
shell:
"""
python {input.script} --input_files {input.data} --model_name model_py_lo --model_dir results/{params.detector_version}/{params.detector_config}/detector_benchmarks/{params.subsystem}/{params.benchmark}/artifacts/model_py/num_epochs_{wildcards.num_epochs}/learning_rate_{wildcards.learning_rate}/size_input_{wildcards.size_input}/size_output_{wildcards.size_output}/n_layers_{wildcards.n_layers}/size_first_hidden_layer_{wildcards.size_first_hidden_layer}/multiplier_{wildcards.multiplier}/leak_rate_{wildcards.leak_rate} --num_epochs {wildcards.num_epochs} --learning_rate {wildcards.learning_rate} --size_input {wildcards.size_input} --size_output {wildcards.size_output} --n_layers {wildcards.n_layers} --size_first_hidden_layer {wildcards.size_first_hidden_layer} --multiplier {wildcards.multiplier} --leak_rate {wildcards.leak_rate}
"""

rule roman_pots_train_model_px_lo:
input:
data = ["results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/processed_data/"+DETECTOR_VERSION+"_"+DETECTOR_CONFIG+"_lo_{index}.txt".format(index=index) for index in NFILES],
script = "train_dense_neural_network.py"
params:
detector_version=DETECTOR_VERSION,
detector_config=DETECTOR_CONFIG,
subsystem=SUBSYSTEM,
benchmark=BENCHMARK
output:
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_px/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/model_px_lo.pt",
"results/"+DETECTOR_VERSION+"/"+DETECTOR_CONFIG+"/detector_benchmarks/"+SUBSYSTEM+"/"+BENCHMARK+"/artifacts/model_px/num_epochs_{num_epochs}/learning_rate_{learning_rate}/size_input_{size_input}/size_output_{size_output}/n_layers_{n_layers}/size_first_hidden_layer_{size_first_hidden_layer}/multiplier_{multiplier}/leak_rate_{leak_rate}/LossVsEpoch_model_px_lo.png"
shell:
"""
python {input.script} --input_files {input.data} --model_name model_px_lo --model_dir results/{params.detector_version}/{params.detector_config}/detector_benchmarks/{params.subsystem}/{params.benchmark}/artifacts/model_px/num_epochs_{wildcards.num_epochs}/learning_rate_{wildcards.learning_rate}/size_input_{wildcards.size_input}/size_output_{wildcards.size_output}/n_layers_{wildcards.n_layers}/size_first_hidden_layer_{wildcards.size_first_hidden_layer}/multiplier_{wildcards.multiplier}/leak_rate_{wildcards.leak_rate} --num_epochs {wildcards.num_epochs} --learning_rate {wildcards.learning_rate} --size_input {wildcards.size_input} --size_output {wildcards.size_output} --n_layers {wildcards.n_layers} --size_first_hidden_layer {wildcards.size_first_hidden_layer} --multiplier {wildcards.multiplier} --leak_rate {wildcards.leak_rate}
"""