2006 International Joint Conference on Neural Networks, Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada, July 16-21, 2006

Accelerated Codebook Searching in a SOM-based Vector Quantizer

Arijit Laha, Member, IEEE, Bhabatosh Chanda and Nikhil R. Pal, Fellow, IEEE

Abstract— Kohonen's SOM algorithm has been used successfully by some researchers for designing codebooks. However, while performing an exhaustive search in a large codebook with high dimensional vectors, the encoder faces a significant computational barrier. Due to its topology preservation property, SOM holds good promise of being utilized for fast codebook searching. In this paper we develop a method for fast codebook searching that exploits the topology preservation property of SOM. The method performs a non-exhaustive search of the codebook to find a good match for an input vector. The method is a general one that can be applied to various signal domains. In the present paper its efficacy is demonstrated with spatial vector quantization of gray-scale images.

I. INTRODUCTION

Kohonen's Self-Organizing Map (SOM) [1] is a competitive learning neural network with very good clustering capability, and consequently it has attracted the attention of researchers in the field of vector quantization [3], [4], [5], [6]. SOM is easy to implement and has found numerous applications. A discussion of the general features of the SOM algorithm provides ample motivation for using it to design vector quantizers. SOM uses a two-layer network. The input layer has the same number of nodes as the dimension of the input data. The second layer is a competitive layer, typically arranged on a rectangular lattice, and the two layers are completely connected. Thus each node in the competitive layer has a weight vector associated with it. On presentation of an input vector, the competitive layer finds the winner node, and the weight vectors associated with the winner and its neighbors (on the lattice) are updated 'on the fly'. Due to the incorporation of neighborhood update (as opposed to the "winner only" update in pure competitive learning) in the training stage, SOM networks exhibit the interesting properties of topology preservation and density matching [1]. The former means that vectors nearby in the input space are mapped to the same node or to nodes nearby in the output space. The density matching property refers to the fact that after training, the distribution of the weight vectors associated with the nodes is proportional to some function of the distribution of the training vectors in the input space. Thus more codevectors are placed in regions with a high density of training vectors. In other clustering algorithms, a dense and well separated cluster is usually represented by a single cluster center. Though this is very good if such clusters are subsequently used for a pattern classification task, in the case of vector quantization, where the aim is to reduce the reproduction error, this may

Arijit Laha is with the Institute for Development and Research in Banking Technology, Castle Hills, Hyderabad 500 057, India (email: [email protected]). Bhabatosh Chanda and Nikhil R. Pal are with the Electronics & Communication Sciences Unit, Indian Statistical Institute, 203 B. T. Road, Calcutta 700 108, India (email: {chanda, nikhil}@isical.ac.in).


not be that good. The total reproduction error of a VQ is the sum of the granular error and the overload error [2]. The granular error is the component of the quantization error due to the granular nature of the quantizer for an input that lies within a bounded cell. Due to the density matching property, the SOM places several prototypes in a densely populated region and thus makes the quantization cells small in such areas. This leads to a reduction in the granular error component, resulting in preservation of finer details, which is a highly desirable property in VQ. The overload error component arises when the input lies in an unbounded cell, representing data at the boundary of the training sample distribution. Since the distribution of the codewords replicates the distribution of the training data, the overload error is also low when the distribution of the test data is well represented by that of the training data.

In [7] Chang and Gray introduced an on-line technique for VQ design using a stochastic gradient algorithm, which can be considered a special case of the SOM algorithm; it was shown to perform slightly better than the generalized Lloyd algorithm (GLA). Nasrabadi and Feng [3] also used SOM for VQ design and demonstrated performance better than or similar to GLA. Yair et al. [4] used a combination of SOM and stochastic relaxation and obtained consistently better performance than GLA. Amerijckx et al. [5] used the SOM algorithm to design a VQ for the coefficients of the discrete cosine transform of the image blocks; the output of the VQ encoder is further compressed using entropy coding. They reported performance equivalent to or better than the standard JPEG algorithm. In [6] SOMs are used for designing a VQ, and a novel surface-fitting method for refining the codebook is introduced to achieve better psychovisual quality of the reproduced image.

Apart from finding good codevectors, the VQ performance depends crucially on the size of the codebook and the dimension of the vectors. By increasing the codebook size, the input space can be represented to a finer degree. Further, the VQ works by directly exploiting the intra-block statistical redundancies, i.e., the redundancies between pixels in the same block. Therefore, a larger block enables the quantizer to exploit the statistical redundancy existing in the data to a greater degree [2]. However, using a large number of codevectors and high dimensional vectors introduces a serious performance bottleneck on the part of the encoder. Given any vector to be encoded, the encoder has to search the codebook for the best matching codevector. Thus, to perform a full search of a large codebook with high dimensional vectors, the encoder incurs a large computational overhead. Many techniques have been developed to circumvent this difficulty. These techniques apply various constraints on the structure of the codebook and use suitably altered encoding algorithms. These


techniques can be divided into two major categories. Memoryless VQs perform encoding/decoding of each vector independently; well known examples in this category include tree-structured VQ (TSVQ), classified VQ, shape-gain VQ, multistage VQ and hierarchical VQ. Methods in the other category depend on past information (thus having memory) for their operation; prominent members include predictive VQ, recursive or feedback VQ and finite state VQ (FSVQ). Comprehensive reviews of these methods can be found in [2], [8], [9]. All these techniques aim at reducing the codebook search complexity without a significant loss in reproduction quality. Among recent works, in [10] Lai and Liaw developed a fast codebook searching algorithm based on projection and the triangular inequality that uses features of the vectors to reject impossible codewords. In [11] a method for designing a predictive vector quantizer using deterministic annealing is proposed. An approach to designing an optimal FSVQ is proposed in [12]. A novel method of designing TSVQs can be found in [13].

If a SOM is used to generate the VQ codebook, the training algorithm implicitly introduces some constraints, resulting in the topology preservation property. In this paper we develop a method that exploits this property of SOM to perform a fast, non-exhaustive search of the codebook. A SOM with a 2-dimensional lattice is trained with training images. The weight vectors corresponding to the network nodes are used as the codevectors. Thus a code index can be expressed by a pair of (x, y) co-ordinates of the corresponding node on the lattice. For image data, due to the topology preservation property of SOM, given the index of an already encoded vector, the index corresponding to a vector representing a neighboring block is likely to be found among the codevectors nearby on the SOM lattice. Thus a search restricted to the lattice neighborhood is likely to unearth a good match. The proposed method is designed to exploit this very possibility. Though in this paper we design a spatial vector quantizer (SVQ) for image data to demonstrate the power of the proposed method, it is applicable to vector quantization of any kind of signal. The method also offers scope for efficient Huffman coding of the index values. In Section II we outline the codebook design scheme, Section III covers the encoding method in detail, and we report the experimental results in Section IV. In this paper we use the peak signal to noise ratio (PSNR) as the performance measure of the VQ, and we work with 256-level gray value images only.
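For 8-bit gray-scale images, PSNR is computed in the standard way from the mean squared error (MSE) between the original and reproduced images:

PSNR = 10 log_10 (255^2 / MSE) dB.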

II. CODEBOOK DESIGN

The codebook is designed by training a two-dimensional SOM with a set of training vectors extracted from a set of training images. After training, the weight vectors corresponding to the nodes of the SOM are used as codevectors. Their indexes are expressed as 2-tuples (x, y) corresponding to the positions of the nodes in the SOM lattice. The next two subsections describe the procedure for preparing the image vectors and briefly review the SOM algorithm, respectively.

A. Preparing the Image Vectors

To produce the feature vectors, an h × w image is divided into p × q blocks. The pixel values in a p × q block are arranged in row-major fashion to form the feature vector x. Thus each block is represented by a vector x ∈ R^k, where k = pq and each component x_i of x is the value of the (lexicographically ordered) i-th pixel in the block. We convert each vector x to a mean-removed or residual vector x̃ = x − x̄1, where x̄ = (1/k) Σ_{i=1}^{k} x_i is the average pixel value of the block and 1 is a k-dimensional vector with all elements equal to 1. We use the residual vectors to train the SOM as well as to encode the images. The averages are transmitted separately to the decoder and added to the reproduced vectors, so that the reconstructed vector is x̂ = y_j + x̄1, where y_j is the codevector (reproduction vector) corresponding to x. This conversion essentially reduces the dimension of the input space by one, i.e., from k to k − 1, and thereby greatly reduces the potential number of clusters.
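This preparation step can be sketched in a few lines of Python (a minimal illustration of our own, assuming NumPy; the function name and interface are ours):

import numpy as np

def image_to_residual_vectors(img, p=8, q=8):
    """Split a gray-scale image into p x q blocks (row-major scan) and
    return mean-removed block vectors together with the block means."""
    h, w = img.shape
    vectors, means = [], []
    for r in range(0, h - p + 1, p):
        for c in range(0, w - q + 1, q):
            x = img[r:r + p, c:c + q].astype(np.float64).reshape(-1)  # k = p*q
            m = x.mean()              # block average, transmitted separately
            vectors.append(x - m)     # residual vector used for training/encoding
            means.append(m)
    return np.array(vectors), np.array(means)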

B. Kohonen's SOM algorithm

A SOM is formed of neurons located on a regular (usually 1D or 2D) grid. Thus each neuron is identified by an index corresponding to its position on the grid (the viewing plane). Each neuron i is represented by a weight vector w_i ∈ R^p, where p is the dimensionality of the input space. In the t-th training step, a data point x is presented to the network. The winner node, with index r, is selected as

r = arg min_i ||x − w_{i,t−1}||,

and w_{r,t−1} and the other weight vectors associated with cells in the spatial neighborhood N_t(r) are updated using the rule

w_{i,t} = w_{i,t−1} + α(t) h_{ri}(t) (x − w_{i,t−1}),

where α(t) is the learning rate and h_{ri}(t) is the neighborhood kernel (usually Gaussian). Both the learning rate and the radius of the neighborhood kernel decrease with time. During the iterative training the SOM behaves like a flexible net that folds onto the "cloud" formed by the input data.
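As an illustration of the above update rule, a minimal SOM training loop might look as follows (a sketch of our own in Python/NumPy; the exponential decay schedules and the Gaussian kernel are common choices, not prescriptions of this paper):

import numpy as np

def train_som(data, m=32, n=32, iters=100000, a0=0.5, s0=16.0):
    """Train an m x n SOM on the row vectors in `data` (shape: N x k)."""
    rng = np.random.default_rng(0)
    k = data.shape[1]
    w = rng.standard_normal((m * n, k))            # weight vectors w_i
    rows, cols = np.divmod(np.arange(m * n), n)    # lattice positions of the nodes
    grid = np.stack([rows, cols], axis=1).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        r = np.argmin(((x - w) ** 2).sum(axis=1))  # winner node r
        alpha = a0 * np.exp(-t / iters)            # decaying learning rate alpha(t)
        sigma = s0 * np.exp(-t / iters)            # shrinking neighborhood radius
        d2 = ((grid - grid[r]) ** 2).sum(axis=1)   # squared lattice distances to r
        h = np.exp(-d2 / (2.0 * sigma ** 2))       # Gaussian kernel h_ri(t)
        w += alpha * h[:, None] * (x - w)          # the update rule above
    return w.reshape(m, n, k)                      # codebook laid out on the lattice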


III. THE ENCODER

Encoding a vector involves searching the codebook to identify the code index corresponding to a codevector having sufficiently good (preferably the best) similarity to the input vector. This is followed by lossless Huffman coding of the index value. Since we use mean-removed vectors, the mean value also needs to be transmitted for reconstruction, so the mean value is Huffman coded as well.

A. Searching the Codebook

The proposed method is fashioned after finite state vector quantizers (FSVQ) [2]. FSVQs belong to a more general class of VQs known as recursive or feedback vector quantizers. Given an input sequence of vectors x_n, n = 1, 2, · · ·, the encoder produces both a sequence of code indexes u_n, n = 1, 2, · · ·, and a sequence of states S_n, n = 1, 2, · · ·, which determines the behavior of the encoder. A FSVQ is characterized by a finite set of K states S = {1, 2, · · · , K} and a state transition function f(u, s). Given the current state s and the last code index u, when presented with the next vector x of the input sequence, the FSVQ enters a new state determined by f(u, s). Associated with each state s is a smaller codebook C_s, known as the state codebook for state s. The search for the codevector for the new vector x is then restricted to C_s, which is much smaller than the full codebook C = ∪_{s∈S} C_s, also known as the super codebook. Obviously, the major challenge in designing a FSVQ involves the design of the state transition function and finding the codebook corresponding to each state. One of the simplest and most popular techniques for finding a next-state function is called conditional histogram design [2]. This approach is based on the observation that each codeword is followed almost invariably by one of a very small subset of the available codevectors, owing to the high degree of correlation between successive vectors in an input sequence. Thus the performance of the VQ should not degrade much if these small subsets form the state codebooks. So the training sequence is used to determine the conditional probability p(j | i) of generating the j-th codevector following the generation of the i-th codevector. The state codebook for state i, of size N_i, consists of the N_i codevectors y_j with the highest values of p(j | i). However, with this design, especially for data outside the training sequence, the optimal codevector may lie outside the state codebook, so often a threshold of matching is used. If no codevector with a match exceeding the threshold is found in the state codebook, the system is said to 'derail' [12]. In such a situation an exhaustive search over the super codebook is usually performed to re-initialize the search process.
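For concreteness, conditional histogram design can be sketched as follows (an illustration of our own; `train_indexes` is a hypothetical sequence of codevector indexes obtained by encoding the training data):

import numpy as np

def state_codebooks(train_indexes, K, Ni):
    """Estimate p(j|i) from successive index pairs and keep, for each
    state i, the Ni codevectors with the highest conditional probability."""
    counts = np.zeros((K, K))
    for i, j in zip(train_indexes[:-1], train_indexes[1:]):
        counts[i, j] += 1                          # histogram of transitions i -> j
    # State codebook for state i: the Ni most probable successors j.
    return [np.argsort(-counts[i])[:Ni] for i in range(K)]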

While using SOM to generate the codebook, one can associate with each codevector a tuple describing the position of the corresponding node on the SOM lattice. Further, due to the topology preservation property of SOM, nodes nearby on the lattice plane encode vectors with higher similarity than nodes located far away on the lattice. Since the input vectors corresponding to two adjacent blocks in the image are likely to be correlated (i.e., expected to have high similarity), their codevectors are also likely to be associated with nodes that are close to each other on the lattice. Thus, if the codevector for an input vector corresponds to a node c on the lattice, then the codevector for an adjacent block is likely to be within a small neighborhood N_c of c. This property can be exploited to reduce the codebook search time. So we define a search window of size s_h × s_w and a quality threshold T. The image vectors to be quantized are prepared as described in Section II-A. For the first vector an exhaustive search is performed and the index, in the form of an (x_1, y_1) pair, is generated. For each subsequent k-th vector the search is performed among the nodes on the SOM lattice falling within the search window centered at the node (x_{k−1}, y_{k−1}) (we call it the previous index). Due to the topology preservation property of SOM and the characteristic similarity between neighboring image blocks, there is a high possibility of finding a good match within the window. If the best match within the window exceeds the threshold T, the index (x_k, y_k) of the corresponding node is accepted; otherwise the rest of the codebook is exhaustively searched to find the best match. Now (x_k, y_k) becomes the previous index for the (k + 1)-th vector. Thus we can identify the full codebook generated by the SOM as the super codebook of a FSVQ, the current state as the previous index (x_{k−1}, y_{k−1}), and the state codebook as the set of codevectors within the SOM lattice window of size s_h × s_w centered at (x_{k−1}, y_{k−1}).

There are two issues to be taken care of. The first concerns the case when the previous index is close to the boundary of the SOM lattice and the window cannot be centered at it. In such a case, we align the edge of the window with the edge of the lattice. The second issue relates to the image blocks at the beginning of a row of blocks, i.e., the blocks at the left edge of the image. Since the image blocks are encoded in row-major fashion, for all other blocks the previous index corresponds to the index of the block immediately to the left. For the leftmost blocks we use the index of the leftmost block in the previous row (i.e., the block on top of the current block) as the previous index.

There is also a design issue regarding the choice of the window size and the quality threshold T. For a given threshold T, a smaller window size reduces the window search time, but instances of derailment become more frequent. Again, for a fixed window size, the higher the threshold, the more frequent the derailments. It is difficult to predict a suitable choice of these parameters theoretically, since that would require knowledge of the distribution of the codebook vectors over the SOM lattice as well as the distribution of the image vectors being quantized. In the 'Experimental results' section we present some empirical studies regarding the choice of these parameters.

Apart from facilitating fast codebook searching, the SOM-based method has another advantage. Since the indexes are expressed as 2-tuples of lattice positions and the majority of the code indexes are found within a small neighborhood of the previous index, we can express the indexes as 2-tuples of offset values from the previous index. In such a scheme the values in the majority of the index tuples are likely to be much smaller than tuples of absolute values. This feature can be exploited for efficient Huffman coding of the indexes. A sketch of the restricted search procedure described above is given below.
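The following is a minimal Python/NumPy illustration of our own; the PSNR-based match test, the edge alignment, and all names are ours, and the special previous-index rule for row-start blocks is omitted for brevity:

import numpy as np

def encode(vectors, codebook, sw=8, sh=8, T=30.0):
    """codebook: m x n x k array of SOM weight vectors.
    Returns one (x, y) lattice index per input residual vector."""
    m, n, k = codebook.shape
    flat = codebook.reshape(-1, k)

    def psnr(x, y):
        mse = np.mean((x - y) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-12))

    indexes, prev = [], None
    for x in vectors:
        if prev is None:                           # first vector: full search
            j = np.argmin(((flat - x) ** 2).sum(axis=1))
            best = divmod(j, n)
        else:
            px, py = prev                          # window centered at previous index
            x0 = min(max(px - sh // 2, 0), m - sh) # align window edge with
            y0 = min(max(py - sw // 2, 0), n - sw) # lattice edge at the boundary
            win = codebook[x0:x0 + sh, y0:y0 + sw].reshape(-1, k)
            j = np.argmin(((win - x) ** 2).sum(axis=1))
            best = (x0 + j // sw, y0 + j % sw)
            if psnr(x, codebook[best]) < T:        # derailment: search full codebook
                j = np.argmin(((flat - x) ** 2).sum(axis=1))
                best = divmod(j, n)
        indexes.append(best)
        prev = best
    return indexes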


Fig. 1. Grouped histogram of the index offsets for exhaustive search vector quantization of the Lena image

B. Transformation of the indexes for Huffman coding

For a codebook generated with an m × n SOM, the components of the indexes range from 0 to m − 1 and from 0 to n − 1 respectively. However, due to the topology preservation property, even without restricting the search, neighboring image blocks are mapped to nearby nodes in the SOM lattice. So instead of using the absolute values of the co-ordinates, if we express the index of a vector in terms of offsets from the previous index, i.e.,

(x_k^o, y_k^o) = (x_k − x_{k−1}, y_k − y_{k−1}),

then x_k^o and y_k^o are more likely to have small values. Figure 1 depicts the grouped (x and y values) histogram of the index components expressed as offset values for the 512 × 512 Lena image, for a VQ using a 32 × 32 SOM with 8 × 8 blocks (i.e., 64-dimensional vectors) employing exhaustive search. For restricted search the distribution is expected to have an even sharper peak around 0. Clearly, coding the indexes in terms of offset values allows us to perform efficient Huffman coding. However, using offset values stretches the ranges of the index components from −(m − 1) to (m − 1) and from −(n − 1) to (n − 1) respectively; hence we would need more code words. We can restrict the component values of the offsets within 0 to (m − 1) or (n − 1), whichever is greater, if the index values are further transformed into (x_k^o′, y_k^o′) as follows:

• If x_k^o ≥ 0 then x_k^o′ = x_k^o; otherwise x_k^o′ = x_k^o + m.
• If y_k^o ≥ 0 then y_k^o′ = y_k^o; otherwise y_k^o′ = y_k^o + n.

Figure 2 depicts the histogram of the transformed offsets for the Lena image (corresponding to the index offsets shown in Figure 1).

Fig. 2. Histogram of the transformed offset values for exhaustive search vector quantization of the Lena image

The decoder recovers the index values (x_k, y_k) from (x_k^o′, y_k^o′) as follows:

• If x_k^o′ < (m − x_{k−1}) then x_k = x_{k−1} + x_k^o′; otherwise x_k = x_{k−1} + (x_k^o′ − m).
• If y_k^o′ < (n − y_{k−1}) then y_k = y_{k−1} + y_k^o′; otherwise y_k = y_{k−1} + (y_k^o′ − n).
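The transform pair can be written compactly as follows (an illustration of our own, using 0-based indices as in the paper; the assertion checks the round trip for one 32-node axis):

def encode_offset(xk, xprev, m):
    """Map the offset xk - xprev from [-(m-1), m-1] into [0, m-1] (one coordinate)."""
    o = xk - xprev
    return o if o >= 0 else o + m

def decode_offset(o, xprev, m):
    """Inverse transform: recover xk from the transformed offset."""
    return xprev + o if o < m - xprev else xprev + o - m

# Round trip over all valid index pairs of a 32-node axis:
assert all(decode_offset(encode_offset(x, p, 32), p, 32) == x
           for p in range(32) for x in range(32))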

(a) (b)

Fig. 3. Reconstructed Lena images. (a) Exhaustive search, (b) Proposed method

In our scheme we use mean-removed vectors, so the block average corresponding to each index must also be stored/transmitted. An average value can be rounded off to the nearest integer and encoded in one byte. However, in images substantial correlation exists among the average gray values of neighboring blocks, so the difference between the average values of neighboring blocks is usually very small. To exploit this feature for efficient Huffman coding, a difference coding scheme for the block averages was developed in [6]; here we use that scheme to achieve further compression.
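The details of the scheme are given in [6]; purely as an illustration of the idea (not necessarily the exact scheme of [6]), a first-difference coding of the rounded block averages could look like:

def diff_code(means):
    """Replace each rounded block average by its difference from the previous
    one; the small differences then Huffman-code efficiently."""
    rounded = [int(round(m)) for m in means]
    return [rounded[0]] + [b - a for a, b in zip(rounded, rounded[1:])]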

IV. EXPERIMENTAL RESULTS

In our experiments we trained a 32 × 32 SOM with training vectors generated from a composite of sixteen 256-level images, each of size 256 × 256. The block size used is 8 × 8; thus the vectors are 64-dimensional and the basic codebook size is 1024. Here we report test results for three 512 × 512 images: Lena, Barbara and Boat. The experimental setup for the results reported here is as follows:
• The search window is set to 8 × 8.
• The quality threshold T is set at 30 dB PSNR (over a block).

In Table I the search complexity is expressed in terms of the number of codevectors examined during the search procedure. For the exhaustive search, with 4096 blocks in a test image, the number of codevectors examined is 4096 × 1024 = 4194304. For the proposed method the complexity is also expressed as a percentage of the number of codevectors examined by the exhaustive search (shown in parentheses). It is evident from the results that, compared to the exhaustive search, the proposed method substantially decreases the search time without any significant sacrifice in reproduction quality. The method nearly halves the search complexity with a negligible decrease in PSNR for all the images. The compression ratios reported here are the final values with Huffman coding of the transformed index offsets and the difference-coded block averages.


TABLE I
COMPARISON OF VQ WITH EXHAUSTIVE SEARCH AND THE PROPOSED RESTRICTED SEARCH

Image     Search method      No. of distance calculations   PSNR (dB)   Compression ratio (bpp)
Lena      Exhaustive         4194304                        28.95       0.227
Lena      Proposed method    1993984 (47.5%)                28.82       0.218
Barbara   Exhaustive         4194304                        24.37       0.231
Barbara   Proposed method    2710144 (64.6%)                24.34       0.227
Boat      Exhaustive         4194304                        26.97       0.207
Boat      Proposed method    2348224 (56.0%)                26.93       0.203

[Figure 4 surface plots: horizontal axes are the search window size (s_w = s_h) and the quality threshold T; vertical axes are (a) the number of distance calculations as a percentage of exhaustive search and (b) the quality of the reproduced image measured with PSNR (dB).]

Fig. 4. (a) Variation of the number of distance calculations with search window size and quality threshold for quantizing the Lena image. (b) Variation of reproduction quality (measured in PSNR) with search window size and quality threshold for quantizing the Lena image.

Figure 3 shows the reproduced Lena image quantized using the exhaustive search VQ (a) and the proposed method (b). As mentioned earlier, in the design of the encoder the choice of the search window size s_h × s_w and the quality threshold T plays an important role. We conducted an empirical study by designing the VQ with various choices of search window size and quality threshold, and collected statistics for quantizing the Lena image. Figure 4(a) depicts the variation in the number of distance calculations for different window sizes and quality thresholds. It can be observed that the distance computation increases with both the window size and the threshold value. However, the variation is much smaller with respect to the threshold value than with respect to the window size. This clearly indicates a strong possibility of finding a good match within a small window. Figure 4(b) presents the variation of the reproduction quality. Here it is evident that the quality threshold has more influence on the reproduction quality than the window size.

V. CONCLUSION

SOM is usually used as a clustering algorithm for designing VQs. In this work, apart from utilizing the clustering property, we exploited the other notable property

of topology preservation to formulate an encoding method with reduced search complexity. The method exploits the topology preservation property by restricting the codebook search to a small window. This strategy is designed in line with finite state VQs, but without explicit calculation of state codebooks; an exhaustive search of the codebook is performed only to re-initialize the encoder. The use of SOM and restricted search, combined with suitable transformations of the index values and block averages, enabled us to apply Huffman coding to enhance the compression ratio without compromising reproduction quality. The choice of two design parameters, the search window size and the quality threshold, influences the computational load of the encoder significantly. However, our empirical study shows that the rate of increase in computational load with increasing window size for a fixed quality threshold is much greater than when the window size is kept constant and the threshold is increased. This indicates that if a match satisfying a certain threshold is to be found within the search window, more often than not it is found within a small neighborhood of the previous index. This finding also indicates the suitability of the SOM-based codebook search method proposed in this work. The efficiency of the proposed method can be greatly enhanced if the instances of exhaustive search are avoided; this issue is currently under investigation.


Though we have reported here the application of the proposed scheme to still images, it can easily be extended to video, where in addition to the spatial correlation, the temporal correlation between blocks in the same position in successive frames can also be exploited for greater efficiency in codebook searching.

REFERENCES

[1] T. Kohonen, "The self-organizing map," Proc. IEEE, vol. 78, no. 9, pp. 1464-1480, 1990.
[2] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, Boston, 1992.
[3] N. M. Nasrabadi and Y. Feng, "Vector quantization of images based upon the Kohonen self-organization feature maps," Proc. 2nd ICNN Conf., vol. 1, pp. 101-108, 1988.
[4] E. Yair, K. Zeger and A. Gersho, "Competitive learning and soft competition for vector quantizer design," IEEE Trans. Signal Processing, vol. 40, no. 2, pp. 294-309, 1992.
[5] C. Amerijckx, M. Verleysen, P. Thissen and J. Legat, "Image compression by self-organized Kohonen map," IEEE Trans. Neural Networks, vol. 9, no. 3, pp. 503-507, 1998.
[6] A. Laha, N. R. Pal and B. Chanda, "Design of vector quantizer for image compression using self-organizing feature map and surface fitting," IEEE Trans. Image Processing, vol. 13, no. 10, pp. 1291-1303, 2004.
[7] P. C. Chang and R. M. Gray, "Gradient algorithms for designing adaptive vector quantizers," IEEE Trans. ASSP, vol. ASSP-34, pp. 679-690, 1986.
[8] H. Abut (ed.), Vector Quantization, IEEE Reprint Collection, IEEE Press, Piscataway, New Jersey, May 1990.
[9] R. M. Gray and D. L. Neuhoff, "Quantization," IEEE Trans. Information Theory, vol. 44, no. 6, pp. 2325-2383, 1998.
[10] J. Z. C. Lai and Y.-C. Liaw, "Fast-searching algorithm for vector quantization using projection and triangular inequality," IEEE Trans. Image Processing, vol. 13, no. 12, pp. 1554-1558, 2004.
[11] H. Khalil and K. Rose, "Predictive vector quantizer design using deterministic annealing," IEEE Trans. Signal Processing, vol. 51, no. 1, pp. 244-254, 2003.
[12] A. Czihó, B. Solaiman, I. Loványi, G. Cazuguel and C. Roux, "An optimization of finite-state vector quantization for image compression," Signal Proc. Image Comm., vol. 15, pp. 545-558, 2000.
[13] M. M. Campos and G. A. Carpenter, "S-TREE: self-organizing trees for data clustering and online vector quantization," Neural Networks, vol. 14, pp. 505-525, 2001.

