New Dataset

Created on June 3 | Last edited on June 4
These runs all use the new long_range_memory_dataset and my 'candidate' learning rule.

Is one layer or two better?


[Line chart: avg_loss]

Should it have a high or low hidden size?

Lower seems better for these fast iterations.

[Chart: run set]


What's the ideal learning rate here?

It really seems like I can keep going smaller.
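Sweeping toward ever-smaller rates is easiest on a log scale; a minimal sketch of generating the candidate values (the range and count here are illustrative assumptions, not the rates from these runs):

```python
import numpy as np

# Log-spaced learning-rate candidates from 1e-2 down to 1e-5.
# The endpoints are placeholders; pick them around the best rate so far.
learning_rates = np.logspace(-2, -5, num=7)

for lr in learning_rates:
    print(f"lr = {lr:.2e}")
```

Each value is a constant multiplicative step below the previous one, so the sweep covers several orders of magnitude with few runs.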

[Chart: run set]


Backprop-trained benchmark.

It's still way better. Kind of concerning.

[Chart: backprop_only run set]


Does my learning rule gain an advantage in deeper networks?


[Chart: run set]

Actually compare to backprop

[Chart: run set]

Oops. That yellow line is my local algorithm, now that I fixed it. It turns out I had the output and label swapped in the loss function. Time to redo this whole report, smh.
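This swap is easy to miss because cross-entropy accepts either argument order without erroring (at least with smoothed labels) but is not symmetric, so the swapped version silently computes a different loss and different gradients. A minimal numpy sketch, assuming a soft-label cross-entropy (the actual loss used in these runs isn't shown in the report):

```python
import numpy as np

def cross_entropy(pred, label):
    # H(label, pred) = -sum(label * log(pred)).
    # Asymmetric in its arguments: swapping them changes the value.
    return float(-np.sum(label * np.log(pred)))

pred = np.array([0.7, 0.2, 0.1])     # model output (softmax probabilities)
label = np.array([0.9, 0.05, 0.05])  # smoothed target distribution

print(cross_entropy(pred, label))  # correct order: ~0.517
print(cross_entropy(label, pred))  # swapped: ~0.972, a different loss
```

With strictly one-hot labels the swapped order would blow up to `inf` (it takes `log(0)`), which is why label smoothing or soft targets can let this bug hide until the curves look suspicious.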