<?xml version="1.0" encoding="UTF-8"?>
<record>
  <title>Multiobjective Optimization Algorithms in the Comparing Continuous Optimizers Platform</title>
  <journal>Progress in Computing Applications</journal>
  <author>Tea Tušar</author>
  <volume>8</volume>
  <issue>1</issue>
  <year>2018</year>
  <doi></doi>
  <url>http://www.dline.info/pca/fulltext/v8n1/pcav8n1_2.pdf</url>
  <abstract>Although the motivation to study multiobjective optimization algorithms comes from practice, only a
few challenging real-world problems are freely available to the research community. Because of this, algorithm benchmarking is
performed primarily on artificial test problems. The most popular artificial test problems have characteristics that are not
well represented in real-world problems. This, together with the predominant inadequate performance assessment methodology, widens
the gap between theory and practice in the field of multiobjective optimization. The paper suggests instead comparing the
algorithms with the anytime performance benchmarking approach of COCO (the Comparing Continuous Optimizers platform)
on more realistic artificial problem suites as well as suites with diverse real-world problems. By listing the benefits of
sharing real-world problems with the community, the paper hopes to encourage domain experts to embrace this practice.</abstract>
</record>
