Search results

200 records were found.

With the proliferation of highly specialized embedded computer systems has come a diversification of workloads for computing devices. General-purpose processors are struggling to efficiently meet these applications' disparate needs, and custom hardware is rarely feasible. According to the authors, reconfigurable computing, which combines the flexibility of general-purpose processors with the efficiency of custom hardware, can provide an alternative. PipeRench and its associated compiler comprise the authors' new architecture for reconfigurable computing. Combined with a traditional digital signal processor, microcontroller, or general-purpose processor, PipeRench can support a system's various computing needs without requiring custom hardware. The authors describe the PipeRench architecture and how it solves some of the pre-existing pr...
Fault tolerance is becoming an increasingly important issue, especially in mission-critical applications where data integrity is a paramount concern. Performance, however, remains a large driving force in the marketplace. Runtime reconfigurable hardware architectures have the power to balance fault tolerance with performance, allowing the amount of fault tolerance to be tuned at run-time. This paper describes a new built-in self-test designed to run on, and take advantage of, runtime reconfigurable architectures using the PipeRench architecture as a model. In addition, this paper introduces a new metric by which a user can set the desired fault tolerance of a runtime reconfigurable device.
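The paper's metric is not reproduced here; as a rough illustration of the kind of run-time tunable trade-off it describes, the sketch below reserves a user-chosen share of fabric stripes for self-test and spares, trading throughput for spare capacity. The names and formulas are placeholder assumptions, not the metric defined in the paper.

```python
# Hypothetical sketch of a tunable fault-tolerance trade-off on a striped
# reconfigurable fabric; the quantities below are illustrative assumptions.

def tradeoff(total_stripes: int, test_stripes: int) -> dict:
    """Reserve `test_stripes` of `total_stripes` for self-test/spare use."""
    if not 0 <= test_stripes < total_stripes:
        raise ValueError("test_stripes must leave at least one compute stripe")
    compute = total_stripes - test_stripes
    return {
        "relative_throughput": compute / total_stripes,  # fewer compute stripes
        "spare_fraction": test_stripes / total_stripes,  # crude tolerance proxy
    }

if __name__ == "__main__":
    for k in range(0, 8, 2):
        print(k, tradeoff(16, k))
```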
The article defines a class of architectures for pipeline-reconfigurable FPGAs by parameterizing a generic model. This class of architecture is sufficiently general to allow exploration of the most important design trade-offs. The parameters include the word size and LUT size, the number of global busses and registers associated with each logic block, and the horizontal interconnect within each stripe. We have developed an area model for the architecture that allows us to quickly estimate the area of an instance of the architectural class as a function of the parameter values. We compare the estimates generated by this model to one instance of the architecture that we have designed and fabricated.
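A minimal sketch of what such a parameterized area model can look like, assuming a simple additive cost per processing element plus a wiring term; the coefficients and functional form are placeholders, not the fitted model from the article.

```python
# Placeholder area model: per-PE LUT, register, and bus costs plus a
# stripe-wide interconnect term, all in arbitrary area units.

def stripe_area(word_size, lut_inputs, global_buses, registers_per_pe,
                pes_per_stripe, interconnect_factor=1.0):
    """Estimate the area of one stripe as a function of the parameters."""
    lut_area = word_size * (2 ** lut_inputs)     # LUT bits per PE
    reg_area = word_size * registers_per_pe      # register file per PE
    bus_area = word_size * global_buses          # bus taps/drivers per PE
    pe_area = lut_area + reg_area + bus_area
    # horizontal interconnect grows super-linearly with stripe width
    wiring = interconnect_factor * (pes_per_stripe * word_size) ** 1.5
    return pes_per_stripe * pe_area + wiring

if __name__ == "__main__":
    print(stripe_area(word_size=8, lut_inputs=3, global_buses=4,
                      registers_per_pe=8, pes_per_stripe=16))
```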
Lenient languages, such as Id90, have been touted as among the best functional languages for massively parallel machines [AHN88]. Lenient evaluation combines non-strict semantics with eager evaluation [Tra9 1]. Non-strictness gives these languages more expressive power than strict semantics, while eager evaluation ensures the highest degree of parallelism. Unfortunately, non-strictness incurs a large overhead, as it requires dynamic scheduling and synchronization. As a result, many powerful program analysis techniques have been developed to statically determine when non-strictness is not required [CPJ85, Tra91, Sch94]. This paper studies a large set of lenient programs and quantifies the degree of non-strictness they require. We identify several forms of non-strictness, including functional, conditional, and data structure non-strictne...
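Python is strict, but explicit thunks can illustrate the idea of conditional non-strictness and why it has both a payoff and a cost: the unused branch of a conditional is never evaluated, at the price of building and forcing closures. This is only an illustration; it does not model Id90's combination of non-strict semantics with eager evaluation.

```python
# Strict vs. thunked ("non-strict") argument passing, illustrated in Python.

def strict_choose(flag, a, b):
    # By the time we get here, both a and b have already been evaluated.
    return a if flag else b

def nonstrict_choose(flag, a_thunk, b_thunk):
    # Only the branch that is actually needed is ever forced.
    return a_thunk() if flag else b_thunk()

def expensive():
    raise RuntimeError("should never be forced")

if __name__ == "__main__":
    # With thunks, the unused (and here, erroneous) branch is never evaluated.
    print(nonstrict_choose(True, lambda: 42, expensive))
```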
Many modern parallel languages support dynamic creation of threads or require multithreading in their implementations. The threads describe the logical parallelism in the program. For ease of expression and better resource utilization, the logical parallelism in a program often exceeds the physical parallelism of the machine and leads to applications with many fine-grained threads. In practice, however, most logical threads need not be independent threads. Instead, they could be run as sequential calls, which are inherently cheaper than independent threads. The challenge is that one cannot generally predict which logical threads can be implemented as sequential calls. In lazy multithreading systems each logical thread begins execution sequentially (with the attendant efficient stack management and direct transfer of control and data). ...
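A toy sketch of the lazy-threading idea described above, assuming a hypothetical promotion flag: each logical thread starts as a plain sequential call and is turned into a real thread only when concurrency is actually required.

```python
# Lazy promotion of logical threads: run inline (cheap) by default, fall back
# to a real OS thread only when the caller must proceed concurrently.
# The `blocking_needed` flag is an illustrative stand-in for the real
# promotion criteria used by lazy multithreading systems.

import threading

def run_logical_thread(fn, *args, blocking_needed=False):
    if not blocking_needed:
        return fn(*args)                              # inline sequential call
    t = threading.Thread(target=fn, args=args)        # promote to a real thread
    t.start()
    return t

if __name__ == "__main__":
    print(run_logical_thread(lambda x: x * x, 7))      # runs inline
    handle = run_logical_thread(print, "concurrent", blocking_needed=True)
    handle.join()
```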
The forecast of technological change is one of the most important assumptions in most long-term energy-economic models. This project seeks to improve the ability of integrated assessment (IA) models to incorporate changes in the cost and performance of technologies, especially environmental technologies, over time. In this report, we present results of research that examines past experience in controlling other major power plant emissions that might serve as a reasonable guide to future rates of technological progress in carbon capture and sequestration (CCS) systems. In particular, we focus on U.S. and worldwide experience with sulfur dioxide (SO2) and nitrogen oxide (NOx) control technologies over the past 30 years, and derive empirical learning rates for these technologies. The patterns of technology innovation are captured by our analysis ...
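The standard experience-curve form behind such learning rates is cost = a * Q^(-b), with the learning rate defined as 1 - 2^(-b) (the fractional cost reduction per doubling of cumulative capacity Q). The sketch below fits b by a log-log least-squares line; the capacity and cost numbers are made-up placeholders, not the SO2/NOx data analyzed in the report.

```python
# Derive an empirical learning rate from experience-curve data:
# cost = a * Q**(-b)  =>  learning rate = 1 - 2**(-b).

import math

def learning_rate(capacities, costs):
    """Fit log(cost) = log(a) - b*log(Q) by least squares; return 1 - 2**-b."""
    xs = [math.log(q) for q in capacities]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    b = -slope
    return 1.0 - 2.0 ** (-b)

if __name__ == "__main__":
    caps = [10, 20, 40, 80, 160]    # cumulative installed capacity (placeholder)
    cost = [100, 88, 77, 68, 60]    # unit cost index (placeholder)
    print(f"learning rate = {learning_rate(caps, cost):.1%} per doubling")
```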
In its current configuration, the IECM provides a capability to model various conventional and advanced processes for controlling air pollutant emissions from coal-fired power plants before, during, or after combustion. The principal purpose of the model is to calculate the performance, emissions, and cost of power plant configurations employing alternative environmental control methods. The model consists of various control technology modules, which may be integrated into a complete utility plant in any desired combination. In contrast to conventional deterministic models, the IECM offers the unique capability to assign probabilistic values to all model input parameters, and to obtain probabilistic outputs in the form of cumulative distribution functions indicating the likelihood of different costs and performance results.
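The general flow of such a probabilistic analysis can be sketched as follows: draw samples from distributions assigned to uncertain inputs, propagate them through the model, and read results off the resulting cumulative distribution. The cost relationship and the distributions below are arbitrary placeholders, not the IECM's equations.

```python
# Monte Carlo propagation of uncertain inputs to a cumulative distribution
# of a toy plant-cost output (all relationships and units are placeholders).

import random

def plant_cost(fuel_price, so2_removal_eff, capacity_factor):
    """Toy cost relationship (arbitrary units)."""
    return 30 + 1.8 * fuel_price + 12 * so2_removal_eff / max(capacity_factor, 0.01)

def monte_carlo(n=10_000, seed=0):
    rng = random.Random(seed)
    return sorted(
        plant_cost(rng.uniform(1.5, 3.0),            # fuel price
                   rng.uniform(0.90, 0.99),           # SO2 removal efficiency
                   rng.normalvariate(0.75, 0.05))     # capacity factor
        for _ in range(n)
    )

if __name__ == "__main__":
    costs = monte_carlo()
    for p in (0.05, 0.50, 0.95):
        print(f"P{int(p * 100):02d} cost = {costs[int(p * (len(costs) - 1))]:.1f}")
```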
An overview of the current IECM structure appears in Figure 1-1. Briefly, the IECM was designed to permit the systematic evaluation of environmental control options for pulverized coal-fired (PC) power plants. Of special interest was the ability to compare the performance and cost of advanced pollution control systems to “conventional” technologies for the control of particulates, SO2, and NOx. Of importance also was the ability to consider pre-combustion, combustion, and post-combustion control methods employed alone or in combination to meet tough air pollution emission standards. Finally, the ability to conduct probabilistic analyses is a unique capability of the IECM. Key results are characterized as distribution functions rather than as single deterministic values. In this report we document the analytical basis for several model enh...
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the k-median, k-center, and k-means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in k-median, we are allowed only swap moves. The local-search algorithm for k-median was analyzed by Arya et al. (SIAM J. Comput. 33(3):544-562, 2004), who used a clever “coupling” argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was a 3-approximation; their techniques have since been applied to other fa...
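The open/close/swap neighborhood described above is easy to state in code. The sketch below runs that local search on a tiny made-up metric instance; it illustrates the moves only and claims none of the approximation guarantees discussed in the paper.

```python
# Local search for metric uncapacitated facility location (UFL) with the
# natural moves: open a facility, close a facility, or swap one for another.

import math

def ufl_cost(open_set, clients, facilities, opening_cost):
    if not open_set:
        return math.inf
    connect = sum(min(math.dist(c, facilities[f]) for f in open_set) for c in clients)
    return connect + opening_cost * len(open_set)

def local_search(clients, facilities, opening_cost):
    current = {0}                      # start with an arbitrary facility open
    best = ufl_cost(current, clients, facilities, opening_cost)
    improved = True
    while improved:
        improved = False
        all_f = range(len(facilities))
        candidates = []
        candidates += [current | {f} for f in all_f if f not in current]   # open
        candidates += [current - {f} for f in current]                     # close
        candidates += [(current - {f}) | {g}                               # swap
                       for f in current for g in all_f if g not in current]
        for cand in candidates:
            c = ufl_cost(cand, clients, facilities, opening_cost)
            if c < best - 1e-9:
                current, best, improved = cand, c, True
    return current, best

if __name__ == "__main__":
    clients = [(0, 0), (1, 0), (5, 5), (6, 5)]
    facilities = [(0, 1), (6, 4), (3, 3)]
    print(local_search(clients, facilities, opening_cost=2.0))
```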
In many optimization problems, a solution can be viewed as ascribing a “cost” to each client and the goal is to optimize some aggregation of the per-client costs. We often optimize some Lp-norm (or some other symmetric convex function or norm) of the vector of costs—though different applications may suggest different norms to use. Ideally, we could obtain a solution that optimized several norms simultaneously.
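A small numeric example of why the choice of norm matters, using arbitrary per-client cost vectors: one solution wins under the L1 objective while the other wins under L2 and L-infinity, so no single assignment is simultaneously optimal for every norm.

```python
# Compare two per-client cost vectors under different Lp-norm aggregations.

def lp_norm(costs, p):
    if p == float("inf"):
        return max(costs)
    return sum(c ** p for c in costs) ** (1.0 / p)

if __name__ == "__main__":
    solution_a = [4, 4, 4, 4]     # balanced costs
    solution_b = [1, 1, 1, 10]    # cheaper in total, but one client pays a lot
    for p in (1, 2, float("inf")):
        print(f"p={p}: A={lp_norm(solution_a, p):.2f}  B={lp_norm(solution_b, p):.2f}")
```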