This highly topical text considers the construction of the next generation of the Web, called the Semantic Web. This will enable computers to automatically consume Web-based information, overcoming the human-centric focus of the Web as it stands at present, and expediting the construction of a whole new class of knowledge-based applications that will intelligently utilise Web content. The text is structured into three main sections, covering knowledge representation techniques, reasoning with multi-agent systems, and knowledge services. For each of these topics, the text provides an overview of the state-of-the-art techniques and of the popular standards that have been defined. Numerous small programming examples demonstrate how the benefits of Semantic Web technologies can be realised at the present time. The main theoretical results underlying each of the technologies are presented, and the principal open problems and research issues are summarised. Based on a course on 'Multi-Agent Systems and the Semantic Web' taught at the University of Edinburgh, this text is ideal for final-year undergraduate and graduate students in Mathematics, Computer Science, Artificial Intelligence, and Logic, as well as for researchers interested in Multi-Agent Systems and the Semantic Web.
At the present time, the Web is designed primarily for human consumption and not for computer consumption. This may seem an unusual state of affairs, given that the Web is a vast and mature computerized information resource. However, we must recognize that the computer is presently used as the carrier of this information, not as its consumer. As a result, a great deal of the potential of the Web has yet to be realized.
This book explores the challenges of automatic, computer-based processing of information on the Web. In effect, we want to enable computers to use Web-based information in much the same way as humans presently do. Our motivation is that computers have a brute-force advantage over humans: where we can gather and process information from a handful of Web-based sources, a computer can download and compare thousands of such sources in a matter of seconds. Nonetheless, despite the apparent simplicity of this task, a great many issues must be addressed if we are to make effective use of this information, and the automated processing of Web-based information is consequently still in its infancy. In this book, we show how many different techniques can be used together to address this task.
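To make the scale argument concrete, the short sketch below (an illustration of ours, not one of the book's own examples) uses only the Python standard library to fetch several pages in parallel and count how often a term appears in each. The URLs and the search term are placeholder assumptions; the point is simply that a program can scan many sources in the time a person would need to read one.

# Illustrative sketch: fetch several Web pages in parallel and count a term.
# The URLs and TERM below are placeholders chosen for this example.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = [
    "https://www.w3.org/RDF/",
    "https://www.w3.org/2001/sw/",
    "https://www.w3.org/TR/owl2-overview/",
]
TERM = "ontology"

def count_term(url):
    # Download the page and count case-insensitive occurrences of TERM.
    with urlopen(url, timeout=10) as response:
        text = response.read().decode("utf-8", errors="replace")
    return url, text.lower().count(TERM)

with ThreadPoolExecutor(max_workers=8) as pool:
    # pool.map runs the downloads concurrently and yields results in order.
    for url, hits in pool.map(count_term, URLS):
        print(url, "-", hits, "occurrence(s) of", TERM)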