Quantum finite automata were introduced by C. Moore and J. P. Crutchfield [4], and by A. Kondacs and J. Watrous [3]. This notion is not a generalization of deterministic finite automata; indeed, in [3] it was proved that not all regular languages can be recognized by quantum finite automata. A. Ambainis and R. Freivalds [1] proved that for some languages quantum finite automata can be exponentially more concise than both deterministic and probabilistic finite automata. In this paper we introduce the notion of quantum finite multi-tape automata and prove that there is a language recognized by a quantum finite automaton but not by any deterministic or probabilistic finite automaton. This is the first result on a problem which can be solved by a quantum computer but not by a deterministic or probabilistic computer. Additionally, we discover unexpected probabilistic automata recognizing complicated languages.
We construct a hierarchy of regular languages such that each language in the hierarchy can be accepted by a 1-way quantum finite automaton with a probability smaller than the corresponding probability for the preceding language in the hierarchy. These probabilities converge to 1/2.
An algorithm given by Ambainis and Freivalds [1] constructs a quantum finite automaton (QFA) with O(log p) states recognizing the language L_p = {a^i | i is divisible by p} with probability 1 − ε, for any ε > 0 and arbitrary prime p. In [4] we gave examples showing that the algorithm is applicable also to quantum automata of very limited size. However, the Ambainis-Freivalds algorithm is tailored to constructing a measure-many QFA (defined by Kondacs and Watrous [2]), which cannot be implemented on existing quantum computers. In this paper we modify the algorithm to construct a measure-once QFA of Moore and Crutchfield [3] and give examples of parameters for this automaton. We show for the language L_p that a measure-once QFA can be twice as space efficient as measure-many QFAs.
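For illustration, the sketch below (a Python simulation, not the construction given in the paper) evaluates the acceptance probability of a rotation-based measure-once QFA for L_p: each "track" rotates by the angle 2πk/p on every input letter, so words whose length is divisible by p return every track to its initial position and are accepted with certainty, while a word of another length i is accepted with the average of cos²(2πki/p) over the chosen multiples k. The multiples used here are a hypothetical, unoptimized choice.

```python
import numpy as np

def accept_probability(i: int, p: int, ks: list[int]) -> float:
    """Acceptance probability of a toy measure-once QFA for L_p = {a^i : p divides i}.

    One planar rotation ("track") per multiple k in ks; the start state spreads
    its amplitude evenly over the tracks, and the accepting subspace is spanned
    by the tracks' initial basis vectors.
    """
    d = len(ks)
    prob = 0.0
    for k in ks:
        state = np.array([1.0, 0.0]) / np.sqrt(d)      # this track's share of the start state
        theta = 2 * np.pi * k / p
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        for _ in range(i):                              # read the word a^i
            state = rot @ state
        prob += state[0] ** 2                           # overlap with the accepting vector
    return prob

p = 7
ks = [1, 2, 3]   # hypothetical rotation multiples, not the parameters from the paper
for i in range(2 * p):
    print(f"a^{i}: accepted with probability {accept_probability(i, p, ks):.3f}")
```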
An abstract data modelling approach to the problems of econometric measurement, and system definition and analysis, is proposed. The approach is based on an extension of classical ideas of mathematical systems theories, developed for the purpose of laying formal foundations for a theory of information systems. In the proposed approach, an economic system is defined by the measurement data, and can be meaningfully discussed and, with the help of computing tools, also processed, even before statistical redundancy reductions have been made. A distinctive feature of the formalism is the separation of local and global analyses of the system variables.
Despite the emphasis on securities rather than on cash streams in general financial theories, the concept of cash stream remains useful in modelling realistic financial decision problems. The paper presents a unified view on cash stream modelling, valid in both continuous and discrete time, in the standard analytical framework of Schwartz distributions. Abstract cash stream spaces are defined by stipulating certain general properties, and are then identified as some well-known Lebesgue lattices of measures on the real line. All basic financial notions such as discounting or preference ordering carry over to the general case, and the linear decision problems become instances of infinite-dimensional linear programming.
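As a toy illustration of the discrete case (a minimal sketch under simplifying assumptions, not the paper's distributional framework), a cash stream supported on finitely many dates can be represented as a list of dated amounts and discounted by pairing it with a discount function; the continuous-time case replaces the sum by an integral against the cash-stream measure.

```python
def present_value(cash_stream, rate):
    """Present value of a discrete cash stream at a constant per-period discount rate."""
    return sum(amount * (1 + rate) ** (-t) for t, amount in cash_stream)

# hypothetical stream: an outlay at time 0 followed by two equal payoffs
stream = [(0, -100.0), (1, 60.0), (2, 60.0)]
print(present_value(stream, rate=0.05))
```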
Considered are operators that leave the set of non-invertible (in the sense of Ehrenpreis) distributions stable. They simultaneously generalise the operation of convolution by a distribution with compact support and the operation of multiplication by a real analytic function; they are here called pseudo-convolutions since they also generalise pseudo-differential operators. (It is shown that the elliptic real analytic pseudo-differential operators leave both the non-invertible and the invertible distributions invariant.) But when the condition of real analyticity is relaxed, such operators may map a non-invertible distribution to an invertible one, provided that invertibility in both cases refers to the same function space. By varying the space, however, one can measure the 'loss of non-invertibility' that a non-analytic perturbation may introduce. This phenomenon is here studied using the Beurling classes of functions and measuring the regularity of operator symbols in the Denjoy-Carleman sense; the Gevrey case turns out to be particularly simple.
Electronic group support systems (GSS) are distributed computer technologies, developed to support cooperative work in organisations, predominantly in a business environment. Electronic meeting systems (EMS), for example, are GSS specifically developed to support meetings. The considerable experience gathered from the use of GSS in business is briefly summarised and discussed, with the aim of determining the potential use of this technology in university education. The current transfer of the GSS technology into universities is critically examined, in particular in the perspective of the rapidly emerging global platform of distributed multimedia technology. The discussion is exemplified with the GroupSystems EMS developed at the University of Arizona, USA.
Within decades, the emerging Distributed Multimedia Technology (DMT) will cause major changes in the global economy and in the social and political structures. The inevitable metamorphosis of the ‘knowledge industry’, which schools and universities are part of, should be particularly dramatic. Short-term economic arguments suggest that the physical and administrative structures defining institutions of higher learning today may soon become redundant as knowledge becomes a fluent commodity for ‘just in time’ delivery. Opposing this trend is the considerable inertia of the educational system, its internal power structures, and the limited technological knowledge of the educators. Calling for moderation are also theoretical arguments representing traditional academic and moral values.
Firms engaging in international trade are subject to risk from fluctuations of currency exchange rates. Although such risk can be actively managed with suitable hedging strategies, the problem is a complex one, and small firms rarely have access to the required expertise. This paper proposes a hybrid intelligent system, called HYNES, which integrates neural network and expert system technologies to support exchange rate risk management in small firms. The three phases of the development of HYNES (analysis, design, and implementation) are briefly discussed, giving some attention to the open problems of general intelligent systems development.
Numerical time series, and especially periodic ones, are characterized up to pertinent symmetries by families of norms. The electricity consumption of a household, recorded daily during a month’s time, say, may then be encoded in a sequence of numbers; for example, as follows: the mean daily consumption, the mean daily variation of the consumption, the variation of the variation, the variation of the variation of the variation, etc. Now, replacing each of these numbers by the digit 0, 1, or 2, to say that the number is “low”, “medium”, or “high” in relation to a collection of households, one naturally partitions the collection by the strings of these three digits; the household labeled 102 then has medium daily consumption, low daily variation, but high variation of variation, etc. We discuss this innocent idea in general and examine it in three ways: by way of toy examples, through its mathematical model (presented in detail elsewhere), and by classifying some actual electricity consumption data accordingly.
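The sketch below shows one possible reading of this encoding, under simplifying assumptions of our own: "variation" is taken to be the mean absolute day-to-day difference, iterated a few times, and "low", "medium", "high" are terciles of each quantity over the whole collection of households. It is an illustration only, not the mathematical model referred to above.

```python
import numpy as np

def features(series, depth=3):
    """Mean level followed by iterated mean absolute variations of a daily series."""
    feats = [np.mean(series)]
    current = np.asarray(series, dtype=float)
    for _ in range(depth):
        current = np.abs(np.diff(current))
        feats.append(np.mean(current) if len(current) else 0.0)
    return feats

def encode(households, depth=3):
    """Label each household with a string of digits 0/1/2, one digit per feature."""
    table = np.array([features(s, depth) for s in households])
    terciles = [np.percentile(table[:, j], [33.3, 66.7]) for j in range(table.shape[1])]
    codes = []
    for row in table:
        digits = ["0" if v <= lo else ("1" if v <= hi else "2")
                  for v, (lo, hi) in zip(row, terciles)]
        codes.append("".join(digits))
    return codes

rng = np.random.default_rng(0)
households = [rng.normal(10 + 5 * k, 1 + k, size=30) for k in range(5)]  # toy consumption data
print(encode(households))
```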
We survey papers on problems of learning by quantum computers. The quest of quantum learning, as that of quantum computation, is to produce tractable quantum algorithms in situations where tractable classical algorithms do not exist, or are not known to exist. We see essentially three papers [18, 92, 93] which in this sense separate quantum and classical learning. We also briefly sample papers on quantum search, quantum neural processing, and quantum games, where quantum learning problems are likely to appear.
A language L(n) of n-tuples of words which is recognized by an n-tape rational finite-probabilistic automaton with probability 1 − ε, for arbitrary ε > 0, is called quasideterministic. It is proved in [Fr 81] that every rational stochastic language is a projection of a quasideterministic language L(n) of n-tuples of words. Had projections of quasideterministic languages on one tape always been rational stochastic languages, we would have a good characterization of the class of rational stochastic languages. However, we prove the opposite in this paper: there exists a two-tape quasideterministic language whose projection on the first tape is a nonstochastic language.
The authors consider a learning problem which occurs when changes in the knowledge system of a firm (learning) alter its business objectives (preferences). The grounds for evaluating learning may become known only after the learning has taken place. The article presents a review of current learning theories and of rational choice theory.
To the extent a cognitive artifact extends natural language, questions of the former should be preceded by answers to those of the latter; and questions about cognitive science and pertinent technology should begin by asking how one may verbalise one’s ideas about cognition, one’s own cognition to start with. One does that, it is plain, in two ways: one talks of one’s thoughts and one’s feelings. One thus sees oneself not as one but as at least two. Not to cause unrest, however, one continues to talk of oneself as one, calling one’s pluralistic faculties, in the singular, the Soul or the Intellect, the nest of the classical trivium of the beautiful, the good, and the intelligent. Degraded to tangible by social demand, the Intellect becomes intelligence plain, semantically rooted in behavior, hence operational, prerogative of machine. One’s remaining spiritual faculties, collectively labeled psyche, are something to aid by therapy or drugs to keep one from acting strange. Bottled in rational formaldehyde for almost a century, only recently have they been restituted by science as key actors of cognition. But in public space they remain non gratae, increasingly so indeed as the digital strait jacket steadily tightens around people’s souls, taking the spark out of their social and professional presence, the spark that survived both Descartes and Marx. A century after Freud, four after Shakespeare, and four and twenty after Plato, feelings remain a mystery eluding words. Perhaps they should? We share our feelings on these vital mushy matters, irrespectively.
Freivalds and Smith [R. Freivalds, C.H. Smith, Memory limited inductive inference machines, Springer Lecture Notes in Computer Science 621 (1992) 19-29] proved that probabilistic limited memory inductive inference machines can learn with probability 1 certain classes of total recursive functions that cannot be learned by deterministic limited memory inductive inference machines. We introduce quantum limited memory inductive inference machines as quantum finite automata acting as inductive inference machines. These machines, we show, can learn classes of total recursive functions not learnable by any deterministic, nor even by any probabilistic, limited memory inductive inference machine.
This paper introduces tensor-product neural networks, composed of a layer of univariate neurons followed by a net of polynomial post-processing. We examine the general approximation problem for these networks, observing in particular their relationship to the Stone-Weierstrass theorem for uniform function algebras. The implementation of the post-processing as a two-layer network with logarithmic and exponential neurons leads to potentially important 'generalised' product networks which, however, require a complex approximation theory of the Müntz-Szasz-Ehrenpreis type. A backpropagation algorithm for product networks is presented and used in three computational experiments. In particular, approximation by a sigmoid product network is compared to that of a single-layer radial basis network and a multiple-layer sigmoid network.
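A minimal forward-pass sketch of such a product network is given below (a simplified reading with hypothetical parameter names; it is not the paper's exact architecture, and the backpropagation algorithm is omitted). The logarithmic and exponential neurons realise products of powers of the first-layer sigmoid outputs, exp(Σ_j w_kj log h_j) = Π_j h_j^{w_kj}.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def product_network_forward(x, W1, b1, W2, a):
    h = sigmoid(W1 @ x + b1)        # univariate first-layer outputs, all strictly positive
    log_h = np.log(h)               # logarithmic neurons
    products = np.exp(W2 @ log_h)   # exponential neurons: prod_j h[j] ** W2[k, j]
    return a @ products             # linear combination of the product units

rng = np.random.default_rng(1)
x = rng.normal(size=4)                                   # toy input
W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)     # first (sigmoid) layer
W2, a = rng.normal(size=(3, 6)), rng.normal(size=3)      # product units and output weights
print(product_network_forward(x, W1, b1, W2, a))
```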