1993 Working Papers

Recent Submissions

  • Item
    Parallel programming with PICSIL1
    (Working Paper, University of Waikato, Department of Computer Science, 1993-10) Pearson, Murray W.; Melchert, Matthew
    This paper describes the background and development of PICSIL1, a visual language for specifying parallel algorithms using structured decomposition. PICSIL1 draws on both graphical and textual specification techniques: the former for the high-level structure of an algorithm, the latter for more detailed functional specifications. The graphical notation is based on Data Flow Diagrams (DFDs), which are well suited to the assembly and interconnection of abstract modules. Minor modifications to DFDs were, however, needed to make them suitable for describing parallel algorithms; these include the ability to dynamically replicate sections of a diagram and to change the structure of parts of a diagram depending on the data being processed. Work is proceeding on an editor for the direct capture and editing of PICSIL1 descriptions, and compiler and visual debugging tools are planned for the near future.
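    Although PICSIL1 itself is graphical, the replication idea can be suggested in code. The following Python sketch (all names hypothetical, not drawn from the paper) represents a DFD-like diagram as a set of process nodes and shows one process being replicated at run time, mirroring the data-dependent restructuring described above:

      # Minimal sketch (hypothetical names): a DFD-like structure in which a
      # process node can be replicated at run time, echoing PICSIL1's ability
      # to dynamically replicate sections of a diagram.
      from dataclasses import dataclass, field

      @dataclass
      class Process:
          name: str
          inputs: list = field(default_factory=list)   # incoming data-flow names
          outputs: list = field(default_factory=list)  # outgoing data-flow names

      @dataclass
      class Diagram:
          processes: dict = field(default_factory=dict)

          def add(self, proc):
              self.processes[proc.name] = proc

          def replicate(self, name, n):
              """Replace one process with n run-time copies, one per data partition."""
              template = self.processes.pop(name)
              for i in range(n):
                  self.add(Process(f"{name}[{i}]",
                                   [f"{f}[{i}]" for f in template.inputs],
                                   [f"{f}[{i}]" for f in template.outputs]))

      d = Diagram()
      d.add(Process("worker", inputs=["task"], outputs=["result"]))
      d.replicate("worker", 4)   # structure now depends on the data (4 partitions)
      print(sorted(d.processes))  # ['worker[0]', 'worker[1]', 'worker[2]', 'worker[3]']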
  • Item
    The design of an optimistic AND-parallel Prolog
    (Working Paper, University of Waikato, Department of Computer Science, 1993-10) Cleary, John G.; Olthof, Ian
    A distributed AND-parallel Prolog implementation is described. The system can correctly handle all pure Prolog programs. In particular, it deals with the problem of distributed backtracking. Conflicts in variable bindings are resolved by assigning a time value to every unification; bindings with smaller time values are given precedence over those with larger time values. The algorithm is based on the optimistic Time Warp system, with Prolog-specific optimizations, including two new unification algorithms that permit unification and backtracking to proceed in any order. The result is a system that can fully exploit both dependent and independent AND-parallelism.
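    A minimal Python sketch of the timestamp idea (illustrative only; the paper's actual algorithms are more involved): each binding records the time of the unification that produced it, a conflicting binding with a smaller time value displaces one with a larger time value, and the displaced work must roll back, as in Time Warp:

      # Minimal sketch (not the paper's algorithm): resolving conflicting
      # variable bindings by unification time. On a conflict the smaller
      # time wins; the losing branch must roll back (signalled here with
      # an exception).
      class Rollback(Exception):
          """Raised when a unification loses to a binding with precedence."""

      bindings = {}  # variable -> (value, time)

      def bind(var, value, time):
          if var not in bindings:
              bindings[var] = (value, time)
              return
          old_value, old_time = bindings[var]
          if old_value == value:
              return                            # consistent: nothing to do
          if time < old_time:
              bindings[var] = (value, time)     # earlier unification takes precedence
              raise Rollback(f"undo work that used {var}={old_value} (t={old_time})")
          raise Rollback(f"{var}={value} (t={time}) loses to t={old_time}")

      bind("X", "a", time=3)
      try:
          bind("X", "b", time=1)   # earlier in logical time: displaces X=a
      except Rollback as r:
          print(r)
      print(bindings)              # {'X': ('b', 1)}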
  • Item
    Informal introduction to Starlog
    (Working Paper, University of Waikato, Department of Computer Science, 1993-10) Cleary, John G.
    This report provides an informal and gentle introduction to the logic programming language Starlog and is intended eventually to form the first chapter of a book on Starlog. As in Prolog, a widely used logic programming language, Starlog programs consist of sets of Horn clauses. Starlog differs from Prolog in the way it is executed and in its use of logical time to order execution. The resulting style of programming tends to differ from Prolog's and to resemble that of relational database programs.
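    The following Python sketch (not Starlog syntax; the counting rule is invented for illustration) suggests the execution model the report contrasts with Prolog's depth-first search: timestamped facts are derived bottom-up in increasing logical-time order:

      # Minimal sketch (hypothetical rule): executing timestamped facts in
      # logical-time order, bottom-up. The rule derives count(T+1, N+1)
      # from count(T, N), i.e. a simple counter.
      import heapq

      agenda = [(0, ("count", 0))]   # (logical time, fact) priority queue
      derived = []

      while agenda and len(derived) < 5:
          t, (name, n) = heapq.heappop(agenda)           # smallest time first
          derived.append((t, name, n))
          heapq.heappush(agenda, (t + 1, (name, n + 1))) # fire the rule

      print(derived)   # facts appear in strictly increasing logical time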
  • Item
    Proving the existence of solutions in logical arithmetic
    (Working Paper, University of Waikato, Department of Computer Science, 1993-10) Cleary, John G.
    Logical arithmetic is a logically correct technique for real arithmetic in Prolog, implemented using constraints over interval representations. Four problems with the technique are considered: answers are conditional and uninformative; iterative computations may lead to unboundedly large constraint networks; negation is difficult to handle effectively; and computing extrema is often ineffective. A solution to these problems is proposed in the form of "existential intervals", which record the existence of a solution to a set of constraints within an interval. It is shown how to operate on existential intervals and how they solve the four problems.
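    As an illustration of the interval-constraint style that logical arithmetic builds on (a minimal sketch, not the paper's implementation), the following Python code narrows the bounds of two intervals under the constraint X + Y = 10. Note that a non-empty result does not by itself prove a solution exists; that weakness is exactly what existential intervals are introduced to address:

      # Minimal sketch: one propagation step for the constraint x + y = total,
      # narrowing each interval (a pair of bounds) against the other.
      def narrow_sum(x, y, total):
          x = (max(x[0], total - y[1]), min(x[1], total - y[0]))
          y = (max(y[0], total - x[1]), min(y[1], total - x[0]))
          if x[0] > x[1] or y[0] > y[1]:
              raise ValueError("constraint is unsatisfiable")
          return x, y

      print(narrow_sum((0, 6), (0, 8), 10))   # ((2, 6), (4, 8))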
  • Item
    Language inference from function words
    (Working Paper, University of Waikato, Department of Computer Science, 1993) Smith, Tony C.; Witten, Ian H.
    Language surface structures exhibit regularities that make it possible to learn a capacity for producing an infinite number of well-formed expressions. This paper outlines a system that uncovers and characterizes such regularities through principled, large-scale pattern analysis of machine-readable text. The system uses the notion of closed-class lexemes to divide the input into phrases, and from these phrases infers lexical and syntactic information. The set of closed-class lexemes is derived from the text, and these lexemes are then clustered into functional types. Next, the open-class words are categorized according to how they tend to appear in phrases and clustered into a smaller number of open-class types. Finally, these types are used to infer, and generalize, grammar rules. Statistical criteria are employed for each of these inference operations. The result is a relatively compact grammar that is guaranteed to cover every sentence in the source text from which it was formed. Closed-class inferencing compares well with current linguistic theories of syntax and offers a wide range of potential applications.
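    A minimal Python sketch of the first inference step (the closed-class word list here is a hand-supplied toy, whereas the system derives it from the text): function words serve as cut points that divide running text into phrases:

      # Minimal sketch (toy word list): using closed-class (function) words
      # to cut text into phrases, the first step of the pipeline above.
      CLOSED_CLASS = {"the", "a", "of", "in", "and", "to"}

      def phrases(text):
          out, current = [], []
          for word in text.lower().split():
              if word in CLOSED_CLASS and current:
                  out.append(current)   # a function word opens a new phrase
                  current = []
              current.append(word)
          if current:
              out.append(current)
          return out

      print(phrases("the cat sat in the hat of a friend"))
      # [['the', 'cat', 'sat'], ['in'], ['the', 'hat'], ['of'], ['a', 'friend']]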