Greg Peckory - 1 year ago

Python Question

**The Problem**

I've been doing a bit of research on Particle Swarm Optimization, so I said I'd put it to the test.

The problem I'm trying to solve is the Balanced Partition Problem - or, reduced simply, the Subset Sum Problem (where the target sum is half the total of all the numbers).

It seems the generic formula for updating velocities for particles is

`v[i] = w*v[i] + c1*r1*(pbest[i] - x[i]) + c2*r2*(gbest[i] - x[i])`

(where `w` is the inertia weight, `c1`/`c2` are learning rates and `r1`/`r2` are random numbers in `[0,1]`), but I won't go into too much detail for this question.
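In code, the standard continuous-space update reads roughly like this (a sketch only; the constants `w`, `c1`, `c2` are the usual inertia weight and learning rates, and the values chosen here are just common defaults):

```python
import random

def update_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Standard (continuous) PSO velocity update, applied per dimension.

    v     -- current velocity vector
    x     -- current position vector
    pbest -- this particle's best-known position
    gbest -- the swarm's best-known position
    """
    return [
        w * v[i]
        + c1 * random.random() * (pbest[i] - x[i])
        + c2 * random.random() * (gbest[i] - x[i])
        for i in range(len(v))
    ]
```

When the particle already sits on both bests, all the attraction terms vanish and only the inertia term remains.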

Since there's no PSO attempt online for the Subset Sum Problem, I looked at the Travelling Salesman Problem instead.

Their approach for updating velocities involved taking sets of visited towns, subtracting one from another and doing some manipulation on that.

I saw no relation between that and the formula above.

**My Approach**

So I scrapped the formula and tried my own approach to the Subset Sum Problem.

I basically used `gbest` and `pbest` to decide, for each element, whether to add it to or drop it from a particle's subset.

i.e. - if my problem space is `[1,2,3,4,5]` (so the target sum is `7` or `8`, either side of half the total), a particle's current position is `[1,None,3,None,None]`, and the `gbest` is `[None,2,3,None,None]`, then the particle keeps `3` (it appears in both), and with some probability adds `2` and drops `1`, nudging it towards `gbest`.

I can post code but don't think it's necessary, you get the idea (I'm using Python btw - hence the `None` placeholders).
So basically, this worked to an extent: I got decent solutions out, but it was very slow on larger data sets and values.

Am I encoding the problem and updating the particle "velocities" in a smart way?

Is there a way to determine if this will converge correctly?

Is there a resource I can use to learn how to create convergent "update" formulas for specific problem spaces?

Thanks a lot in advance!

**Answer**

**Encoding**

Yes, you're encoding this correctly: each of your bit-maps (that's effectively what your 5-element lists are) is a particle.
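For concreteness, the value/`None` list from the question and a plain bit-map are trivially interconvertible (helper names are mine, assuming the `[1,2,3,4,5]` problem space):

```python
space = [1, 2, 3, 4, 5]

def to_bits(particle):
    """[1, None, 3, None, None] -> [1, 0, 1, 0, 0]"""
    return [0 if v is None else 1 for v in particle]

def from_bits(bits):
    """[1, 0, 1, 0, 0] -> [1, None, 3, None, None]"""
    return [space[i] if b else None for i, b in enumerate(bits)]
```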

**Concept**

Your conceptual problem with the equation is because your problem space is a discrete lattice graph, which doesn't lend itself immediately to the update step. For instance, if you want to get a finer granularity by adjusting your learning rate, you'd generally reduce it by some small factor (say, 3). In this space, what does it mean to take steps only 1/3 as large? That's why you have problems.

The main possibility I see is to create 3x as many particles, but then have the transition probabilities all divided by 3. This still isn't very satisfying, but it does simulate the process somewhat decently.

**Discrete Steps**

If you have a very large graph, where a high velocity could give you dozens of transitions in one step, you can utilize a smoother distance (loss or error) function to guide your model. With something this small, where you have no more than 5 steps between any two positions, it's hard to work with such a concept.

Instead, you utilize an error function based on the estimated distance to the solution. The easy one is the absolute difference between the particle's total and the nearer of 7 or 8. A harder one is to estimate distance based on that difference and the particle elements "in play".
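The easy error function, for the `[1,2,3,4,5]` example (targets 7 and 8):

```python
def error(particle, targets=(7, 8)):
    """Absolute distance from the particle's subset-sum to the nearer target."""
    total = sum(v for v in particle if v is not None)
    return min(abs(total - t) for t in targets)
```

An error of 0 means the particle is a solution.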

**Proof of Convergence**

Yes, there is a way to do it, but it requires some functional analysis. In general, you want to demonstrate that the error function is convex over the particle space. In other words, you'd have to prove that your error function is a reliable distance metric, at least as far as relative placement goes (i.e. prove that a lower error *does* imply you're closer to a solution).

**Creating update formulae**

No: this is a heuristic field, based on the shape of the problem space as defined by the particle coordinates, the error function, and the movement characteristics.

**Extra recommendation**

Your current allowable transitions are "add element" and "delete element". Add "swap elements" to these: trade one present member for an absent one. This will allow the trivial error function to define a convex space for you, and you'll converge in very little time.
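A minimal sketch of that swap transition (function name and random choices are illustrative), reusing the value/`None` representation from the question:

```python
import random

def swap_move(particle, space):
    """Trade one present member for an absent one, keeping subset size fixed."""
    present = [i for i, v in enumerate(particle) if v is not None]
    absent = [i for i, v in enumerate(particle) if v is None]
    if not present or not absent:
        return list(particle)  # full or empty subset: nothing to swap
    i, j = random.choice(present), random.choice(absent)
    new = list(particle)
    new[i], new[j] = None, space[j]
    return new
```

Because a swap preserves the subset's cardinality, it lets a particle slide along positions of equal size while the add/delete moves change size, which is what makes the simple error function easier to descend.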