In the fast-paced world of artificial intelligence, we’re seeing an overwhelming surge of unstructured data (think text, images, audio, and video) that traditional database systems just can’t handle effectively. While conventional databases shine at organizing structured information in neat tables, they struggle to grasp the semantic meaning behind the content. This is where vector databases come into play, revolutionizing how we store, index, and retrieve data for AI applications.
As AI practitioners, we need database solutions that can keep up with the semantic understanding capabilities of today’s machine learning models. Vector databases have become essential infrastructure, enabling everything from advanced semantic search to personalized recommendation systems and multimodal AI applications.
In this article, we’ll dive into:
– What vector databases are and how they differ from traditional databases
– The technical architecture that gives vector databases their power
– Leading vector database solutions and tips for choosing the right one
– Practical applications across various domains
– Strategies for integrating with Large Language Models (LLMs)
– The benefits, limitations, and future directions of vector databases
At their essence, vector databases are specialized systems designed to store, index, and efficiently retrieve high-dimensional vector data. Unlike traditional databases that work with structured tabular data, vector databases focus on vector embeddings, which are numerical representations that capture the semantic meaning of content.
You can think of these embeddings as a way to translate human-understandable content into a format that machines can work with mathematically. When we convert a piece of text, an image, or an audio clip into a vector embedding, we create a mathematical representation that retains the semantic relationships within that content.
For instance, in a well-trained embedding space, the vectors for “dog” and “puppy” would be closer together than they would be to “submarine,” reflecting their semantic similarity. These vectors often have hundreds or thousands of dimensions, with each dimension representing some latent aspect of the content’s meaning.
Consider the following Python example:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
sentences = ["This is a dog.", "I have a puppy.", "Submarines operate underwater."]
# Create embeddings (vectors) for each sentence
embeddings = model.encode(sentences)
print(f"Shape of embeddings: {embeddings.shape}")We can see that when we create vector embeddings for these sentences, each sentence is represented by a 384-dimensional vector, with each dimension representing part of the sentence’s meaning.
To grasp why vector databases are a game-changer, let’s compare them with traditional relational database management systems (RDBMS):
| Characteristic | Traditional RDBMS | Vector Databases |
| Data Organization | Tables with rows and columns | Multidimensional arrays (vectors) |
| Schema | Rigid, predefined structure | Greater flexibility |
| Data Processing | Row by row | Vectorized operations |
| Query Mechanism | Exact matching | Similarity matching |
| Strength | Structured data with clear relationships | High-dimensional, unstructured data |
Traditional databases are built around the idea of exact matches. For example, the query “Find all records where the customer_id is 12345” would return only the records carrying that exact identifier.
Vector databases instead use k-nearest neighbours (k-NN) search, a similarity search that returns the k vectors most similar to the query vector. For example, the query “Find content that’s similar to this reference example” would return the records that are most similar to the given example according to a similarity metric.
This fundamental shift opens the door to a whole new category of applications that were previously impractical or even impossible.
Cosine Similarity measures how similar two vectors are based on their direction, with values ranging from -1 (exactly opposite) to +1 (exactly the same). It can be calculated using the following formula:

$$\text{cosine\_similarity}(A, B) = \frac{A \cdot B}{\lVert A \rVert \, \lVert B \rVert}$$
Euclidean Distance measures the straight-line distance between two points in Euclidean space, with values ranging from 0 (identical) to infinity (completely different). It can be calculated using the following formula:

$$\text{euclidean\_distance}(A, B) = \sqrt{\sum_{i=1}^{n} (A_i - B_i)^2}$$
The choice of distance metric really depends on the specific application and the nature of the data being processed. For text applications, cosine similarity is often preferred because it focuses on directional similarity rather than magnitude, while image processing might lean more towards Euclidean distance.
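As a quick illustration, here is a minimal NumPy sketch of both metrics, using two toy vectors that stand in for real embeddings:
import numpy as np
# Two toy vectors standing in for real embeddings
a = np.array([0.2, 0.7, 0.1])
b = np.array([0.3, 0.6, 0.2])
# Cosine similarity: dot product divided by the product of the vector norms
cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
# Euclidean distance: square root of the sum of squared component differences
euclidean_distance = np.linalg.norm(a - b)
print(f"Cosine similarity: {cosine_similarity:.4f}")
print(f"Euclidean distance: {euclidean_distance:.4f}")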
Modern vector databases typically use a four-tier architecture that separates concerns, enhances scalability, and ensures maintainability:
1. Storage Layer: This layer manages the persistent storage of vector data and implements specialized encoding and compression strategies that are optimized for multi-dimensional data.
2. Index Layer: Specialized data structures are maintained to allow efficient performance of similarity searches.
3. Query Layer: This layer processes incoming queries, determines execution strategies, and handles result processing, which often includes caching frequently executed queries.
4. Service Layer: This layer handles client connections and routes requests, including implementing security and multi-tenancy features.
This layered approach allows vector databases to deliver high performance while maintaining flexibility and scalability for a variety of workloads.
Vector indexing is perhaps the most critical component that enables vector databases to perform fast similarity searches on massive datasets. Indexes can be hash-based, tree-based, graph-based, or cluster-based, though we will focus on the most common methods.
A Flat Index is a straightforward, brute-force approach that compares the query vector against every vector in the database. This method works well for small, isolated collections or for multi-tenancy cases, but it doesn’t scale well beyond small datasets.
HNSW (Hierarchical Navigable Small World) is a multi-layered graph structure that allows for efficient navigation through the vector space. It connects vectors to create “small worlds,” enabling searches to be performed in logarithmic time. While it may take longer to build, the results are high-quality and the performance is impressive.
Using Python, we can create a conceptual example of HNSW index creation:
import hnswlib
# Create an index with 384 dimensions (match your embedding size)
# 'cosine' space means we're using cosine similarity as our distance metric
# We could also use 'l2' for Euclidean distance
index = hnswlib.Index(space='cosine', dim=384)
# Initialize with parameters that control index structure and quality
# max_elements: Maximum number of vectors to store (pre-allocates memory)
# ef_construction: Controls index quality during construction - higher values mean better recall but slower build times (typical values: 100-500)
# M: Controls number of connections per node - higher means more connections and better accuracy but requires more memory (typical values: 12-64)
index.init_index(max_elements=10000, ef_construction=200, M=16)
# Add vectors to the index with their IDs
# The IDs are important for retrieving the original items later
index.add_items(embeddings, ids=list(range(len(embeddings))))
# Configure search parameters
# ef: Controls the search quality - higher values mean better recall but slower search speed
# This can be adjusted dynamically based on your requirements (can be different from ef_construction)
index.set_ef(50) # Lower for faster searches, higher for more accurate results
# Example search
# k: Number of nearest neighbors to retrieve
query_vector = embeddings[0] # Using the first vector as an example query
labels, distances = index.knn_query(query_vector, k=5)
print(f"Top 5 most similar vectors to the query: {labels}")
print(f"Their distances: {distances}")IVF (Inverted File Index) is a clustering approach that clusters vectors around centroids, allowing searches to focus only on the clusters nearest to the query vector. This significantly speeds up search times, though it may come at a slight cost to accuracy.
IVF-PQ (Inverted File with Product Quantization) combines IVF with compression techniques. It breaks vectors into smaller subvectors and quantizes each one separately, which reduces memory usage while still maintaining reasonable accuracy.
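As a rough sketch of how these two index types can be constructed, here is an example using FAISS, Meta’s similarity search library mentioned later in this article. The parameter values (nlist, nprobe, number of subvectors) are illustrative, not tuned recommendations:
import faiss
import numpy as np
dim = 384  # must match your embedding size
vectors = np.random.rand(10000, dim).astype('float32')  # stand-in for real embeddings
nlist = 100  # number of clusters (centroids)
# IVF: cluster vectors around nlist centroids, then search only the closest clusters
quantizer = faiss.IndexFlatL2(dim)  # coarse quantizer used to assign vectors to clusters
ivf_index = faiss.IndexIVFFlat(quantizer, dim, nlist)
ivf_index.train(vectors)  # learn the centroids from the data
ivf_index.add(vectors)
ivf_index.nprobe = 10  # clusters inspected per query (speed vs. recall knob)
# IVF-PQ: the same clustering idea, but vectors are also compressed with product quantization
pq_quantizer = faiss.IndexFlatL2(dim)
ivfpq_index = faiss.IndexIVFPQ(pq_quantizer, dim, nlist, 8, 8)  # 8 subvectors, 8 bits each
ivfpq_index.train(vectors)
ivfpq_index.add(vectors)
# Both indexes expose the same search interface
distances, ids = ivf_index.search(vectors[:1], 5)
print(f"IDs of the 5 nearest neighbours of the first vector: {ids}")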
When choosing an indexing method, you’ll need to weigh several factors:
– Build Time: How long it takes to create the index.
– Query Time: The duration of searches.
– Memory Usage: The amount of RAM the index consumes.
– Recall: The percentage of true nearest neighbors that are found.
The choice of indexing method fundamentally shapes your vector database’s performance characteristics, creating a complex web of tradeoffs:
Accuracy vs. Query Speed:
– Flat Indexes provide 100% recall (perfect accuracy) but with linear time complexity that becomes increasingly slow with larger datasets.
– HNSW delivers near-perfect recall (typically 95-99%) with logarithmic search time, making it excellent for applications where accuracy is critical.
– IVF offers moderately high recall (80-95%) with significant speed improvements by restricting searches to relevant clusters.
– IVF-PQ further improves speed and reduces memory usage, but at the cost of lower recall (60-80%), making it suitable for extremely large datasets where approximate results are acceptable.
Build Time vs. Search Performance:
– Indexes that take longer to build (like HNSW) often deliver better search performance.
– Consider whether your use case is write-heavy (frequent index updates) or read-heavy (mostly search operations).
Memory Usage vs. Recall:
– In-memory indexes like HNSW provide superior performance but require substantial RAM.
– Compressed indexes like IVF-PQ use significantly less memory but sacrifice some recall accuracy.
Static vs. Dynamic Data:
– Some indexing methods (like IVF) require periodic rebalancing as new data is added.
– Others (like HNSW) can accommodate new vectors without complete rebuilding but may see gradual performance degradation.
To illustrate these tradeoffs, consider the following approximate comparison on a dataset of 1 million 128-dimensional vectors. These figures represent typical ranges observed across various implementations rather than benchmarks from a specific study:
| Index Type | Build Time | Query Time | Recall@k (% of true nearest neighbours found) | Memory Usage (relative to Flat) |
| Flat | Near-instant | 100-500ms | 100% | 100% (baseline) |
| HNSW (M=16) | 5-10 mins | 1-5ms | 95-99% | 120-150% |
| IVF (nlist=1000) | 1-2 mins | 10-20ms | 85-95% | 100-110% |
| IVF-PQ | 2-4 mins | 5-10ms | 60-80% | 10-20% |
The optimal indexing strategy often involves combining methods, such as using HNSW for critical, performance-sensitive collections and IVF-PQ for larger, less-critical collections. Many vector databases also support hybrid approaches, like IVF-HNSW, which combine the strengths of multiple indexing methods.
When selecting an indexing method, start by identifying your non-negotiable requirements (minimum acceptable recall, maximum query latency, or memory constraints) and then choose the approach that best optimizes the remaining factors for your specific use case.
As datasets expand to millions or even billions of vectors, exact nearest neighbor searches become increasingly expensive. Approximate Nearest Neighbor (ANN) search is a family of techniques that trades perfect recall for significantly improved performance, though it relies on pre-built indexes.
When a search request comes in, ANN algorithms typically:
1. Use the pre-built index to find a subgroup likely containing similar vectors.
2. Apply the chosen distance metric to measure similarity within that subgroup.
3. Sort the results by similarity and return the top-K matches.
You can usually configure the degree of approximation to balance the percentage of exact matches found and search speed.
Vector databases often use compression techniques to minimize memory usage and enhance performance. These techniques are particularly important as collections scale to billions of vectors, where memory efficiency becomes a critical concern.
Product Quantization (PQ) works by decomposing high-dimensional vectors into smaller subvectors. The process follows these steps:
1. Vector Splitting: Each vector is divided into several equal-sized subvectors.
2. Codebook Creation: For each subvector position, a codebook (a set of representative centroids) is created through clustering.
3. Quantization: Each subvector is replaced with the ID of its nearest centroid from the corresponding codebook.
4. Compressed Storage: Instead of storing the original floating-point vector, only the centroid IDs are stored.
To conduct a similarity search with PQ, the system:
1. Splits the query vector into subvectors
2. Computes distances to centroids in each codebook
3. Uses these distances to estimate the full vector distance without decompression
The compression ratio and accuracy can be tuned by adjusting the number of subvectors and the codebook size. Larger codebooks provide better accuracy but reduce compression rates.
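Here is a conceptual sketch of the encoding steps above, using scikit-learn’s KMeans to build the codebooks. It is purely illustrative; production implementations are heavily optimized, and the sizes below are arbitrary:
import numpy as np
from sklearn.cluster import KMeans
vectors = np.random.rand(1000, 128).astype('float32')  # stand-in for real embeddings
num_subvectors = 8  # split each 128-d vector into 8 subvectors of 16 dimensions
codebook_size = 256  # 256 centroids per codebook -> one byte per subvector code
subvector_dim = vectors.shape[1] // num_subvectors
codebooks = []
codes = np.zeros((len(vectors), num_subvectors), dtype=np.uint8)
for s in range(num_subvectors):
    # 1. Vector splitting: take the s-th slice of every vector
    sub = vectors[:, s * subvector_dim:(s + 1) * subvector_dim]
    # 2. Codebook creation: cluster the subvectors into representative centroids
    kmeans = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(sub)
    codebooks.append(kmeans.cluster_centers_)
    # 3. Quantization: replace each subvector with the ID of its nearest centroid
    codes[:, s] = kmeans.labels_
# 4. Compressed storage: 128 float32 values (512 bytes) become 8 uint8 codes (8 bytes)
print(f"Original size per vector: {vectors[0].nbytes} bytes, compressed: {codes[0].nbytes} bytes")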
Scalar Quantization (SQ) uses a simpler approach:
1. Each floating-point component of a vector (typically 32 bits) is converted to a lower-precision representation (often 8 bits).
2. This is done by establishing a range (minimum and maximum values) for each dimension.
3. The range is divided into equal-sized bins, and each vector component is assigned to the nearest bin.
SQ achieves lower compression ratios than PQ, but it is computationally simpler and introduces less error for vectors with clearly defined value ranges. It’s often used as a first-pass compression before applying more sophisticated techniques.
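A minimal NumPy sketch of the 8-bit scalar quantization described above, assuming the min/max range for each dimension is estimated from the data itself:
import numpy as np
vectors = np.random.rand(1000, 128).astype('float32')  # stand-in for real embeddings
# Establish a range (minimum and maximum values) for each dimension
mins = vectors.min(axis=0)
maxs = vectors.max(axis=0)
# Map each 32-bit float component to one of 256 equal-sized bins (8 bits)
scale = (maxs - mins) / 255.0
quantized = np.round((vectors - mins) / scale).astype(np.uint8)
# Approximate reconstruction used when estimating distances
reconstructed = quantized.astype(np.float32) * scale + mins
print(f"Mean absolute reconstruction error: {np.abs(vectors - reconstructed).mean():.5f}")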
Binary Compression techniques transform floating-point vectors into binary strings where similarity can be computed using Hamming distance (counting differing bits). While highly memory-efficient, binary compression generally sacrifices more accuracy than other methods.
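As a toy illustration of the idea, using a simple threshold-based binarization (only one of several possible binarization schemes):
import numpy as np
a = np.random.rand(384).astype('float32')  # stand-in embeddings
b = np.random.rand(384).astype('float32')
# Simple binarization: 1 if the component is above the vector's mean, else 0
a_bits = (a > a.mean()).astype(np.uint8)
b_bits = (b > b.mean()).astype(np.uint8)
# Hamming distance: count the positions where the bit strings differ
hamming_distance = np.count_nonzero(a_bits != b_bits)
print(f"Hamming distance: {hamming_distance} of {len(a_bits)} bits differ")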
When implementing vector compression, consider these tradeoffs:
– Compression Rate vs. Accuracy: Higher compression reduces memory usage but increases distance estimation errors.
– Compression vs. Computation: Some techniques save memory but require more CPU cycles during search.
– Static vs. Dynamic: Some compression methods require recomputing indexes if data distributions change.
These compression methods are particularly crucial for large-scale applications where memory consumption can become a bottleneck, enabling vector databases to manage billions of vectors with reasonable hardware requirements.
The vector database ecosystem has rapidly expanded in recent years, with several standout solutions emerging. Some of the most popular options include:
Pinecone is a fully managed vector database service that emphasizes simplicity and scalability, allowing developers to focus on building applications instead of database management. It excels at handling billions of vectors with low-latency search.
Milvus is an open-source vector database designed for massive scale and high performance. Milvus offers flexibility with support for various indexing algorithms and distance metrics, along with extensive documentation.
Weaviate places a strong emphasis on semantic search capabilities. It features a GraphQL-based query interface and built-in modules for different data types, prioritizing semantic understanding and providing an excellent developer experience.
Other notable options include Qdrant, which focuses on production-ready vector similarity search with robust filtering capabilities, Chroma, a lightweight library designed specifically for easy integration with LLM applications, and FAISS, Meta’s library for efficient similarity search, which is often embedded in other solutions.
When selecting a vector database for your project, keep these key factors in mind:
1. Deployment Model: Do you prefer a fully managed service or a self-hosted solution?
2. Scale Requirements: How many vectors do you need to store, and what query volume do you anticipate?
3. Integration Needs: What existing systems will need to interact with your vector database?
4. Budget Constraints: What is your total cost of ownership tolerance?
5. Team Expertise: How steep is the learning curve for your team?
6. Performance Requirements: What are your specific latency and throughput needs?
Every project has unique requirements, so carefully evaluating these factors will help you choose the most suitable vector database for your specific use case.
Traditional keyword searches rely on exact term matching, often missing the intent behind a query. Semantic search leverages vector databases to grasp context and meaning, delivering more relevant results.
For instance, a traditional search for “heart attack symptoms” might overlook documents discussing “myocardial infarction signs,” despite their relevance. Semantic search recognizes the relationship between these concepts and returns both, significantly enhancing result quality.
In Python, a semantic search function might look like this:
from sklearn.metrics.pairwise import cosine_similarity

def semantic_search(query, documents, model, top_k=5):
    """
    Performs semantic search to find the most relevant documents for a query
    Parameters:
    - query: The search query text
    - documents: List of document texts to search through
    - model: The embedding model to use for vectorization
    - top_k: Number of top results to return (default: 5)
    Returns:
    - List of tuples (document, similarity_score) ordered by relevance
    """
    # Generate embedding for the query
    # The model transforms the text into a high-dimensional vector capturing its meaning
    query_embedding = model.encode(query)
    # Generate embeddings for all documents
    # In a production system, document embeddings would typically be pre-computed
    # and stored in a vector database rather than computed at query time
    document_embeddings = model.encode(documents)
    # Calculate similarity scores using cosine similarity
    similarities = cosine_similarity([query_embedding], document_embeddings)[0]
    # Get the indices of the top_k most similar documents,
    # sorted in descending order (highest similarity first)
    top_indices = similarities.argsort()[-top_k:][::-1]
    # Return the actual documents with their similarity scores
    results = [(documents[i], similarities[i]) for i in top_indices]
    # A production system would also include:
    # - Pre-filtering based on metadata
    # - Post-processing for diversity
    # - Results formatting and highlighting
    return results
Recommendation engines are another powerful application of vector databases. By representing users and items as vectors in high-dimensional space, the system can identify similar users or items and generate personalized recommendations.
When you engage with content on platforms like Netflix or YouTube, your preferences are encoded as vectors. These vectors are then compared to item vectors to find content you’re likely to enjoy, creating the personalized recommendations that drive user engagement.
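Here is a toy sketch of that idea, assuming user and item vectors have already been produced by some upstream embedding or collaborative-filtering model (the dimensions and counts below are arbitrary):
import numpy as np
# Hypothetical pre-computed embeddings: one user vector and a small item catalogue
user_vector = np.random.rand(64)
item_vectors = np.random.rand(500, 64)  # 500 items, 64-dimensional each
# Score items by cosine similarity to the user vector
scores = item_vectors @ user_vector / (
    np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(user_vector)
)
# Recommend the five highest-scoring items
top_items = np.argsort(scores)[-5:][::-1]
print(f"Recommended item IDs: {top_items}")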
Vector databases excel at identifying visually similar content by representing images and videos as vectors that capture their visual characteristics. This enables applications such as:
– Reverse image search (e.g., “find products that look like this reference image”)
– Content moderation systems (flagging similar problematic content)
– Visual product search (finding products that match a specific visual style)
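As a brief sketch, the sentence-transformers library used earlier also ships CLIP models that embed images into the same kind of vector space; the file names below are placeholders for your own images:
from PIL import Image
from sentence_transformers import SentenceTransformer, util
# CLIP maps images (and text) into a shared embedding space
model = SentenceTransformer('clip-ViT-B-32')
# Placeholder file paths - replace with your own images
reference_embedding = model.encode(Image.open('reference_product.jpg'))
candidate_embeddings = model.encode([Image.open('item1.jpg'), Image.open('item2.jpg')])
# Rank candidate images by cosine similarity to the reference image
similarities = util.cos_sim(reference_embedding, candidate_embeddings)
print(f"Similarity of each candidate to the reference: {similarities}")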
One of the most exciting applications is Retrieval Augmented Generation (RAG), which combines vector databases with generative AI models. In RAG systems:
1. A query is converted into a vector embedding.
2. The vector database retrieves relevant documents or passages.
3. This context is fed into a generative model (like GPT-4 or Claude).
4. The model generates a response grounded in the retrieved information.
This approach significantly enhances the factuality and specificity of AI responses by grounding them in relevant information beyond the model’s training data. We will explore this application in more detail in future articles.
Vector databases are essential in the current AI architecture, especially when working with Large Language Models (LLMs). They act as the semantic memory layer, enabling AI systems to:
1. Retrieve relevant information beyond what the model has been trained on.
2. Ground responses in specific knowledge sources.
3. Reduce hallucinations by providing factual context.
4. Maintain consistency in responses over time.
The typical integration process looks like this:
– Convert user queries into vector embeddings.
– Use similarity search to retrieve relevant information from the vector database.
– Enhance the LLM prompt with this retrieved information.
– Generate responses that blend the model’s inherent knowledge with the retrieved context.
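Putting these steps together, a minimal sketch of the retrieval-augmented flow might look like the following. Note that vector_db.search and llm_client.generate are hypothetical placeholders for whichever vector database client and LLM API you actually use:
def answer_with_rag(user_query, embedding_model, vector_db, llm_client, top_k=3):
    # 1. Convert the user query into a vector embedding
    query_vector = embedding_model.encode(user_query)
    # 2. Retrieve the most relevant passages from the vector database
    #    (vector_db.search is a placeholder for your database's query API)
    retrieved_passages = vector_db.search(query_vector, top_k=top_k)
    # 3. Enhance the LLM prompt with the retrieved context
    context = "\n\n".join(passage.text for passage in retrieved_passages)
    prompt = (
        f"Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {user_query}"
    )
    # 4. Generate a response grounded in the retrieved information
    #    (llm_client.generate is a placeholder for your LLM provider's API)
    return llm_client.generate(prompt)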
Vector database performance depends on both the hardware it runs on and how the software is configured.
Hardware Factors:
– The size of the CPU cache can affect latency during frequent operations.
– RAM speed and capacity determine how quickly vectors can be loaded and processed.
– GPU acceleration can significantly speed up certain vector operations.
– Storage I/O capabilities impact data loading times.
Software Optimization:
– The choice and configuration of the index can greatly affect search performance.
– Batch processing can enhance throughput for multiple simultaneous queries.
– Caching frequently accessed vectors helps reduce latency.
– The selection of distance metrics influences both accuracy and performance.
As your vector collections expand, scaling becomes a crucial consideration. Common strategies include:
1. Horizontal Scaling: Adding more nodes to distribute the workload.
2. Sharding: Partitioning the vector space across multiple servers.
3. Replication: Creating redundant copies for fault tolerance and read scaling.
4. Hybrid Approaches: Combining memory and disk storage for cost-effective scaling.
The best scaling strategy will depend on your specific needs regarding latency, throughput, and budget. Managed solutions like Pinecone simplify much of this complexity, while self-hosted options like Milvus require more hands-on configuration.
Evaluating vector database performance is crucial for ensuring your application meets real-world requirements. Here are some key considerations for effective benchmarking:
Query Performance:
– Queries Per Second (QPS): Measures how many queries your system can handle per second, crucial for high-traffic applications.
– Latency: The time it takes to return results for a single query, typically measured in milliseconds. P95 and P99 latencies are especially important for understanding worst-case scenarios.
– Recall@k: The percentage of true nearest neighbors found in the top-k results compared to what a brute force search would return. Higher recall means more accurate results but often at the cost of speed.
Resource Utilization:
– Memory Usage: How much RAM is consumed by your vector index, which directly affects hosting costs.
– CPU Utilization: The processing power required during search operations.
– Disk I/O: Important for disk-based or hybrid indexes.
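To make the Recall@k metric concrete, here is a small sketch that compares an approximate index’s results against brute-force ground truth; the ID lists below are illustrative placeholders for the outputs of your ANN and exact searches:
import numpy as np

def recall_at_k(approx_ids, exact_ids, k):
    """Fraction of the true top-k neighbours that the approximate search also returned."""
    hits = sum(len(set(a[:k]) & set(e[:k])) for a, e in zip(approx_ids, exact_ids))
    return hits / (len(exact_ids) * k)

# approx_ids: results from your ANN index, exact_ids: results from a brute-force search
# Both are lists of ID lists, one per query; the values below are illustrative placeholders
approx_ids = [[3, 7, 12, 9, 21], [5, 2, 18, 40, 11]]
exact_ids = [[3, 7, 9, 12, 33], [5, 2, 18, 11, 8]]
print(f"Recall@5: {recall_at_k(approx_ids, exact_ids, k=5):.2f}")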
There are several approaches to benchmarking vector database performance, including:
1. Representative Dataset: Use a dataset with similar characteristics (dimensionality, distribution) to your production data.
2. Realistic Query Patterns: Test with queries that mimic actual user behavior, including:
– Mixed workloads (reads and writes)
– Batch queries
– Varying query complexity
3. Staged Testing: Test at different data scales and batch sizes to simulate real-world scenarios. This process can be automated using Python:
import time
# Test at different data scales
for dataset_size in [10000, 100000, 1000000]:
    # Test with different batch sizes
    for batch_size in [1, 10, 100]:
        start_time = time.time()
        # Run your search operation here
        elapsed = time.time() - start_time
        qps = batch_size / elapsed
        print(f"Dataset size: {dataset_size}, Batch size: {batch_size}, QPS: {qps}")
Watch out for these common benchmarking pitfalls:
1. Index Configuration Mismatches: Using parameter settings that prioritize build speed over search performance, or vice versa.
2. Ignoring Cold Start Performance: Many applications show different performance characteristics during initial startup versus steady-state operation.
3. Unrealistic Test Data: Using synthetic data that doesn’t reflect the distribution and relationships found in real-world data.
4. Neglecting Scalability Testing: Failing to test how performance degrades as data volume increases.
5. Single-Metric Focus: Optimizing for one metric (like recall) while ignoring others (like latency or memory usage).
By systematically benchmarking your vector database with relevant metrics and realistic scenarios, you can make informed decisions about configuration, scaling, and optimization strategies that balance performance, cost, and accuracy for your specific use case.
Vector databases offer significant benefits for AI applications:
1. Semantic Understanding: They capture relationships and meanings that traditional databases miss, enabling context-aware applications.
2. Efficient Handling of High-Dimensional Data: They are specialized for the types of data modern AI systems generate and consume.
3. Performant Similarity Search: Optimized for finding related items based on meaning rather than exact matches.
4. Flexible Data Representation: Capable of handling diverse unstructured data types (text, images, audio) in a unified manner.
5. Scalability: Designed to manage massive vector collections while maintaining performance.
Vector databases also come with notable limitations:
1. Computational Costs: Similarity searches can be resource-intensive, requiring significant processing power and memory, especially as collections grow.
2. Implementation Complexity: Teams may face learning curves, development time, debugging challenges, and ongoing maintenance needs.
3. Cold-Start Problems: New systems might struggle with performance until they are properly tuned and optimized for specific workloads.
4. Balancing Precision and Recall: The approximate nature of ANN searches necessitates careful tuning to find a balance between retrieving all relevant results and maintaining performance.
5. Integration Challenges: Incorporating vector databases into existing systems can be complex and may require significant architectural changes.
Looking ahead, several emerging trends point to where vector databases are heading:
Hybrid Search Systems: These systems combine traditional keyword search with vector-based semantic search, allowing them to perform structured queries while simultaneously searching for semantically similar content using vector embeddings.
Multi-Modal Search Capabilities: As AI models increasingly process text, images, audio, and video, vector databases are adapting to handle multiple data types simultaneously, which allows applications to search across different media types with a unified semantic understanding.
Self-Tuning Systems: These databases automatically optimize their configuration based on workload patterns and data characteristics, reducing the need for manual tuning and expertise.
Improved Compression Techniques: Ongoing research is focused on developing more efficient vector compression methods that lower memory requirements while maintaining search accuracy.
If you’re considering implementing a vector database in your AI system, here are some key recommendations:
1. Start with Clear Requirements: Define your scale, performance, and integration needs before selecting a solution.
2. Consider Managed Services for Rapid Prototyping: These services reduce operational overhead and allow for faster time-to-market.
3. Benchmark Multiple Solutions: Performance characteristics can vary significantly between implementations.
4. Invest in Embedding Quality: The effectiveness of your vector database is ultimately limited by the quality of your embeddings.
5. Implement Hybrid Approaches Where Appropriate: Combining vector search with traditional filtering often yields the best results.
Vector databases signify a fundamental shift in how we store, process, and retrieve unstructured data. By capturing semantic relationships in high-dimensional vector spaces, they allow AI applications to understand meaning instead of just matching keywords or exact values.
In our opinion, the benefits of semantic understanding and the efficient similarity search that vector databases enable outweigh the limitations of computational costs and implementation complexity.