Dynamic-MKP Benchmark Datasets

Citation Author(s): Jonas Skackauskas, Brunel University London
Submitted by: Jonas Skackauskas
Last updated: Wed, 06/15/2022 - 15:54
DOI: 10.21227/6bfm-bj82

Abstract 

With increasing research on solving Dynamic Optimization Problems (DOPs), many metaheuristic algorithms and their adaptations have been proposed to solve them. However, from the existing research results it is hard to evaluate algorithm performance in a repeatable way for combinatorial DOPs, because each research work has created its own version of a dynamic problem dataset using stochastic methods. To date, there are no combinatorial DOP benchmarks with replicable qualities. This work introduces a consistent, non-stochastic Dynamic Multidimensional Knapsack Problem (Dynamic MKP) dataset generation method that is also extensible, addressing the research replicability problem. Using this method, 1405 Dynamic MKP benchmark datasets were generated and published, using well-known static MKP benchmark instances as the initial state.

Instructions: 

Benchmark dataset files are organised in folders by dynamism, from SAM-0.01 to SAM-0.2. Each dynamism folder contains folders of generated Dynamic MKP benchmarks, named after the static MKP benchmark from the popular GK and OR libraries used as the initial state. Each benchmark folder contains 101 .dat files, each of which represents a fully defined dynamic optimisation problem state. The dataset files are ordered in sequence from State000.dat to State100.dat: State000.dat is the initial state taken from the GK and OR MKP libraries, and the remaining 100 states are generated.
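As a rough illustration of how this layout can be traversed, the following is a minimal Python sketch. The root directory and benchmark folder names (e.g. "Dynamic-MKP", "OR5x100-0.25_1") are placeholders, not names taken from the dataset, and parsing of the .dat contents is omitted because the record format is defined by the dataset files themselves.

from pathlib import Path

def iter_dynamic_mkp_states(root="Dynamic-MKP",
                            dynamism="SAM-0.01",
                            benchmark="OR5x100-0.25_1"):
    """Yield the 101 state files (State000.dat .. State100.dat) in sequence.

    The directory and benchmark names above are illustrative placeholders;
    substitute the actual folder names from the downloaded dataset.
    """
    benchmark_dir = Path(root) / dynamism / benchmark
    for state_index in range(101):
        # State000.dat is the static initial MKP instance; states 001-100
        # are the generated dynamic problem states.
        yield state_index, benchmark_dir / f"State{state_index:03d}.dat"

if __name__ == "__main__":
    for index, path in iter_dynamic_mkp_states():
        print(index, path.name)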
