Results Evaluation and Awards
Since the awarding of prizes is one of the main sources of excitement in the
competition event, we have decided upon a set of evaluation criteria as the basis
for the award decisions.
However, the biggest challenge for a competition
in Knowledge Engineering is to make "winner determination"
meaningful. We therefore encourage everyone to make up their own mind
and to evaluate the tools by examining them directly.
Evaluation Criteria
- support potential: what potential does the tool have to help the processes within the scope of the competition? Will the tool save time and resources?
- scope: how broad is the tool's coverage within the defined scope of the competition?
- usability: can the tool be easily used, accessed and/or configured? Could people who are not planning experts use it?
- interoperability: can the tool be integrated with other P&S technology? Are its interfaces well defined, so that the software can easily be used with other P&S software or combined with third-party planners?
- innovation: what is the quality of the scientific and technical innovations that underlie the software?
- wider comparison: how does the tool compare with KE software in other areas of AI? For example, could the software be subsumed by an existing KBS KE tool?
- build quality: does the software appear robust? Has the software been well tested?
- relevance: to what degree does the tool address problems peculiar to KE for P&S? Is the software relevant or applicable to real-world applications?
- domain simulation applicability: how well did the competitors address the simulation domains using their tools? How many of the domains were the tools tried on? How long did it take competitors to generate valid plans for the domains? How many problem instances were solved? What was the quality of the generated plans?