Thanks @rengel8 for your suggestion. It is definitely interesting. So, you would like to apply mutation in a circular queue of operators (similar to how round-robin scheduling assigns tasks to a CPU).
It will not be a problem to get them supported as independent mutation operators. It would also be a good idea to be able to change the mutation operator in the middle of the run. For now, you can experiment with whether changing the type of mutation/crossover helps to make faster progress. Here is an on_generation() callback that does this:

```python
def on_generation(ga_instance):
    print("Generation " + str(ga_instance.generations_completed))
    best_fitness = ga_instance.best_solution(pop_fitness=ga_instance.last_generation_fitness)[1]
    last_5_generation_idx = ga_instance.generations_completed - 5
    if last_5_generation_idx >= 0:
        last_5_generation_fitness = ga_instance.best_solutions_fitness[last_5_generation_idx]
        # If the best fitness did not improve over the last 5 generations,
        # switch to scramble mutation.
        if (last_5_generation_fitness - best_fitness) == 0:
            ga_instance.mutation = ga_instance.scramble_mutation
    # After 20 generations, switch to two-points crossover.
    if ga_instance.generations_completed == 20:
        ga_instance.crossover = ga_instance.two_points_crossover
```
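If you want the round-robin behaviour itself, here is a minimal sketch that cycles through the built-in mutation operator methods (random_mutation, swap_mutation, inversion_mutation, scramble_mutation), one per generation. The cycling logic is only an illustration, not an existing PyGAD feature:

```python
# Illustrative only: assign the mutation operators in a round-robin fashion,
# switching to the next operator in the queue each generation.
def on_generation_round_robin(ga_instance):
    operators = [ga_instance.random_mutation,
                 ga_instance.swap_mutation,
                 ga_instance.inversion_mutation,
                 ga_instance.scramble_mutation]
    idx = ga_instance.generations_completed % len(operators)
    ga_instance.mutation = operators[idx]
```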
-
I'm coming back to pyGAD after some time away, and in a specific use case I noticed something I had recognized earlier: it is about how fast the best solution converges to a known set of values.
So it does find the result, but given the fast initial progress from random probing and the long phase of almost no advance afterwards, I thought about a custom mutation function (one of the features of pyGAD). Maybe this is also of more general interest to others.
In my case I defined a fixed gene space with all possible image coordinates as integer values, like:
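(The snippet below is only an illustrative reconstruction; the concrete coordinate values are placeholders. What matters is that each of the 6 genes is restricted to 100 allowed integer values.)

```python
# Illustrative gene_space: 6 genes, each limited to 100 integer image
# coordinates (the actual values in my case differ).
gene_space = [list(range(0, 1000, 10)) for _ in range(6)]
```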
So there are 100^6 (1,000,000,000,000) possible solutions in total. I think that the random change used to find new values is sometimes rounded to 0 (is this true?), or that the wrong gene is slightly changed (with adaptive mutation), so that the GA has a hard time finishing the run quickly.
My idea right now would be to write a mutation function in which new solutions are created by something like gene step shifting. In my example I have 6 genes, so I would create up to 12 new solutions, where a gene value (only one at a time) is shifted to the next higher and also to the next lower value allowed by the gene_space definition above.
Since, at least when the search is already close to the optimum, one of these neighbouring solutions will usually return a higher fitness value, this procedure seems interesting for predefined gene spaces.
Would it be possible to do something like this, e.g. if there has been no fitness advance for 5 generations or so?
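A rough sketch of the idea, written as a user-defined mutation function (PyGAD accepts a callable for mutation_type). The stagnation check and the step-shift logic below are only illustrative and adapted to PyGAD's mutation interface, i.e. it mutates the generated offspring rather than building 12 extra candidate solutions:

```python
import random

# Illustrative custom mutation: once fitness has stagnated, shift a single
# gene to a neighbouring value allowed by gene_space instead of picking a
# random value. Assumes gene_space is a list of sorted value lists.
def step_shift_mutation(offspring, ga_instance):
    # Stagnation check: best fitness unchanged over roughly 5 generations.
    history = ga_instance.best_solutions_fitness
    stagnated = len(history) >= 5 and history[-1] == history[-5]

    for sol_idx in range(offspring.shape[0]):
        gene_idx = random.randrange(offspring.shape[1])
        allowed = ga_instance.gene_space[gene_idx]
        if stagnated:
            # Find the index of the nearest allowed value and step one
            # position up or down within the gene space.
            pos = min(range(len(allowed)),
                      key=lambda i: abs(allowed[i] - offspring[sol_idx, gene_idx]))
            pos = max(0, min(len(allowed) - 1, pos + random.choice([-1, 1])))
            offspring[sol_idx, gene_idx] = allowed[pos]
        else:
            # Otherwise keep the usual random exploration.
            offspring[sol_idx, gene_idx] = random.choice(allowed)
    return offspring

# Hypothetical usage:
# ga_instance = pygad.GA(..., gene_space=gene_space,
#                        mutation_type=step_shift_mutation)
```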
I mean, the benefit of the randomness in pyGAD is to quickly get samples, which helps convergence. This works especially well in the beginning, but at least in my example I don't know a better way to push the process further once the search is already rather close.
Any thoughts on this are much appreciated.