
ParaLU & QuartLU – activation functions for Neural Networks

Last updated on May 6, 2026

I just tried two simple activation functions for neural-network function approximation (in the spirit of the Universal Approximation Theorem, UAT): ParaLU and QuartLU, short for Parabolic Linear Unit and Quartic Linear Unit.

I built a neural network with 1000 features and 4 hidden layers to approximate 21 (x, y) pairs. In this test, ParaLU converged much more slowly in training than Sigmoid, but similarly to, or sometimes better than, ReLU. QuartLU also converged much more slowly than Sigmoid, but clearly better than both ParaLU and ReLU. Sigmoid is a real monster in training: it converges remarkably fast.
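As a rough illustration of the setup (not my original experiment code), here is a minimal NumPy sketch with a much smaller network: one hidden layer of 32 units using the ParaLU activation defined below, trained by plain gradient descent on 21 (x, y) pairs. The layer size, learning rate, step count, and the sin(x) target are all assumptions for the sketch.

```python
import numpy as np

def paralu(x, scope=1.0):
    # Parabolic Linear Unit: 0 below -scope, parabola up to scope, identity after.
    return np.where(x < -scope, 0.0,
                    np.where(x < scope, (x + scope) ** 2 / (4 * scope), x))

def paralu_grad(x, scope=1.0):
    # Derivative of the three pieces: 0, (x+scope)/(2*scope), 1.
    return np.where(x < -scope, 0.0,
                    np.where(x < scope, (x + scope) / (2 * scope), 1.0))

rng = np.random.default_rng(0)

# 21 (x, y) pairs to approximate; sin(x) is an assumed stand-in target.
x = np.linspace(-3.0, 3.0, 21).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of 32 units (far smaller than the post's 1000 x 4).
W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.01
losses = []
for step in range(2000):
    # Forward pass.
    z1 = x @ W1 + b1
    h = paralu(z1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append((err ** 2).mean())
    # Backward pass (mean-squared-error gradients).
    g_pred = 2.0 * err / len(x)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_z1 = (g_pred @ W2.T) * paralu_grad(z1)
    gW1 = x.T @ g_z1; gb1 = g_z1.sum(0)
    # Gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

Swapping `paralu`/`paralu_grad` for a QuartLU or Sigmoid pair is how the convergence comparison above could be reproduced at small scale.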

Both ParaLU and QuartLU below are continuously differentiable for all x.

1) ParaLU’s math:

if x<-scope, y=0 and y’=0;

else if x<scope, y=(x+scope)**2/(scope*4) and y’=(x+scope)/(2*scope);

else y=x and y’=1.

It seems that scope=1 is good for ParaLU above.
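The ParaLU definition above translates directly into NumPy; this is a minimal sketch, with the default `scope=1` taken from the suggestion above:

```python
import numpy as np

def paralu(x, scope=1.0):
    # ParaLU: 0 for x < -scope, (x+scope)^2/(4*scope) for -scope <= x < scope,
    # identity for x >= scope. The pieces meet with matching value and slope.
    x = np.asarray(x, dtype=float)
    return np.where(x < -scope, 0.0,
                    np.where(x < scope, (x + scope) ** 2 / (4 * scope), x))

def paralu_grad(x, scope=1.0):
    # Piecewise derivative: 0, (x+scope)/(2*scope), 1.
    x = np.asarray(x, dtype=float)
    return np.where(x < -scope, 0.0,
                    np.where(x < scope, (x + scope) / (2 * scope), 1.0))
```

At x = scope the parabola gives (2*scope)**2/(4*scope) = scope with slope 1, so it joins the identity piece smoothly, which is what makes ParaLU C¹ everywhere.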

2) QuartLU’s math:

if x<-scope, y=0 and y’=0;

else if x<(scope/3), y=(x+scope)**4*27/(256*scope**3) and y’=(x+scope)**3*27/(64*scope**3);

else y=x and y’=1.

It seems that scope=1.5 is good for QuartLU above.
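QuartLU can be sketched the same way; the default `scope=1.5` follows the suggestion above. At x = scope/3 the quartic piece evaluates to scope/3 with slope 1, so it joins the identity piece smoothly:

```python
import numpy as np

def quartlu(x, scope=1.5):
    # QuartLU: 0 for x < -scope, a quartic for -scope <= x < scope/3,
    # identity for x >= scope/3.
    x = np.asarray(x, dtype=float)
    return np.where(x < -scope, 0.0,
                    np.where(x < scope / 3,
                             (x + scope) ** 4 * 27 / (256 * scope ** 3), x))

def quartlu_grad(x, scope=1.5):
    # Piecewise derivative: 0, (x+scope)^3*27/(64*scope^3), 1.
    x = np.asarray(x, dtype=float)
    return np.where(x < -scope, 0.0,
                    np.where(x < scope / 3,
                             (x + scope) ** 3 * 27 / (64 * scope ** 3), 1.0))
```

Compared with ParaLU, the quartic ramp hugs zero longer before rising, which may be related to the better convergence observed in the test above.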
