I have a 2-node cluster with the Spark standalone cluster manager. I'm triggering more than one job using the same configuration:
conf.set("spark.scheduler.allocation.file", sys.env("SPARK_HOME") + "/conf/fairscheduler.xml")
I don't think standalone mode is the problem. You described creating only one pool, so your problem is likely that you need at least one more pool, with each job assigned to a different pool.
FAIR scheduling is done across pools; within a single pool, jobs run in FIFO order by default anyway.
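For example, a minimal fairscheduler.xml defining two pools might look like this (the pool names here are just examples; use whatever names fit your jobs):

```xml
<?xml version="1.0"?>
<!-- Sketch of a fairscheduler.xml with two pools; names "pool1"/"pool2" are placeholders. -->
<allocations>
  <pool name="pool1">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>1</minShare>
  </pool>
  <pool name="pool2">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>1</minShare>
  </pool>
</allocations>
```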
This is based on the documentation here: https://spark.apache.org/docs/latest/job-scheduling.html#default-behavior-of-pools
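To route each job into its own pool, a sketch along these lines should work: enable FAIR mode on the SparkConf, then set the spark.scheduler.pool local property before submitting each job. Since setLocalProperty is per-thread, each job needs its own thread. The app name and the placeholder count() jobs below are assumptions for illustration.

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("fair-pools-demo") // hypothetical app name
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", sys.env("SPARK_HOME") + "/conf/fairscheduler.xml")
val sc = new SparkContext(conf)

// Each thread tags its jobs with a different pool; names must match the XML above.
def runInPool(pool: String): Thread = new Thread(new Runnable {
  def run(): Unit = {
    sc.setLocalProperty("spark.scheduler.pool", pool)
    sc.parallelize(1 to 1000000).count() // placeholder job
  }
})

val t1 = runInPool("pool1")
val t2 = runInPool("pool2")
t1.start(); t2.start()
t1.join(); t2.join()
```

With both jobs in the same pool (or with no pool set, i.e. the default pool), they would run FIFO; putting them in separate pools is what lets the FAIR scheduler share executors between them.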