Amazon S3 Vectors is a durable vector storage solution that can greatly reduce the total cost of uploading, storing, and querying vectors. S3 Vectors is a cloud object store with native support for storing large vector datasets and providing subsecond query performance, making it more affordable for businesses to store AI-ready data at massive scale.

This hands-on walkthrough demonstrates how to use Amazon S3 Vectors with Unstructured. In this walkthrough, you will:
Create an S3 vector bucket.
Create a vector index in the bucket.
Add the contents of one or more source JSON output files that have been generated by Unstructured to the vector index.
Query the vector index against the contents of the source JSON output files that were added.
A set of one or more JSON output files that have been generated by Unstructured and stored somewhere on your local development machine. For maximum compatibility with this example, these files must contain vector embeddings that were generated by Amazon Bedrock, using the Titan Text Embeddings V2 (amazon.titan-embed-text-v2:0) embedding model, with 1024 dimensions. To get these files, you will need:
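Each Unstructured JSON output file is an array of element objects, and the scripts later in this walkthrough rely on each element's `text` and `embeddings` fields. As a quick sanity check, you can verify that every element carries an embedding of the expected dimension. This is a minimal sketch using a hypothetical in-memory element rather than a real output file:

```python
# A hypothetical element, shaped like one entry in an
# Unstructured JSON output file (fields abbreviated).
sample_elements = [
    {
        "element_id": "abc123",
        "text": "Hello, world!",
        "embeddings": [0.1] * 1024,  # Titan Text Embeddings V2 uses 1024 dimensions.
    }
]

def check_elements(elements, expected_dims=1024):
    """Return the IDs of elements missing text or a correctly sized embedding."""
    bad = []
    for element in elements:
        if "text" not in element or len(element.get("embeddings", [])) != expected_dims:
            bad.append(element.get("element_id"))
    return bad

print(check_elements(sample_elements))  # An empty list means all elements are usable.
```

Elements whose IDs this check reports would be skipped by the ingestion script later in this walkthrough.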
An Unstructured account, as follows:
If you do not already have an Unstructured account, sign up for free.
After you sign up, you are automatically signed in to your new Unstructured Starter account, at https://platform.unstructured.io.
The destination connector for your workflow must generate JSON output files. These include destination connectors for file storage services such as Databricks Volumes, Google Cloud Storage, OneDrive, and S3. Destination connectors for databases such as Elasticsearch, Kafka, and MongoDB, and for vector stores such as Astra DB, Pinecone, and Weaviate, do not generate JSON output files.
After your workflow generates the JSON output files, copy them from your workflow's destination location to a location on your local development machine.
Python installed on your local development machine.
With the list of vector buckets showing from the previous step, click the name of the bucket that you just created.
Click Create vector index.
For Vector index name, enter some name for your index.
For Dimension, enter the number of dimensions that Unstructured generated for your vector embeddings. For example, for the Titan Text Embeddings V2 (amazon.titan-embed-text-v2:0) embedding model, enter 1024. If you are not sure how many dimensions to enter, see your workflow's Embedder node settings.
Select the appropriate Distance metric for your embedding model. For example, for the
Titan Text Embeddings V2 (amazon.titan-embed-text-v2:0) embedding model, select Cosine. If you are not sure which distance metric to use,
see your embedding model’s documentation.
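For intuition, cosine distance measures the angle between two embedding vectors and ignores their magnitudes, which is why it pairs well with text embedding models. A minimal sketch, using toy 3-dimensional vectors rather than real 1024-dimensional embeddings:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

print(cosine_distance([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # 0.0: identical direction.
print(cosine_distance([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 1.0: orthogonal vectors.
```

A lower cosine distance therefore means a closer semantic match in the query results.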
Expand Additional settings.
Within Metadata configuration, under Non-filterable metadata, click Add key.
For Key, enter text. This stores each object's text field from the JSON output files as non-filterable metadata in the index, so that query results can return the matched text.
Click Create vector index.
After the vector index is created, copy the value of the index’s Amazon Resource Name (ARN), as you will need it in
later steps. This ARN takes the format arn:aws:s3vectors:<region-id>:<account>:bucket/<bucket-name>/index/<index-name>.
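If it is helpful, the region ID (which you will also need in later steps) can be read straight out of the ARN. A minimal sketch, assuming an ARN in the format shown above and a hypothetical account and bucket for illustration:

```python
def parse_index_arn(arn):
    """Split an S3 Vectors index ARN into its region, account,
    bucket name, and index name components."""
    prefix, resource = arn.split(":bucket/", 1)
    # prefix looks like: arn:aws:s3vectors:<region-id>:<account>
    _, _, _, region, account = prefix.split(":")
    bucket_name, index_name = resource.split("/index/", 1)
    return region, account, bucket_name, index_name

arn = "arn:aws:s3vectors:us-east-1:123456789012:bucket/my-bucket/index/my-index"
print(parse_index_arn(arn))  # ('us-east-1', '123456789012', 'my-bucket', 'my-index')
```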
Add the following code to a Python script file in your virtual environment, replacing the following placeholders:
Replace <source-json-file-path> with the path to the directory that contains your JSON output files.
Replace <index-arn> with the ARN of the vector index that you created previously in Step 2.
Replace <index-region-short-id> with the short ID of the region where your vector index is located, for example us-east-1.
```python
import boto3
import os
import json
import uuid

source_json_file_path = '<source-json-file-path>'
index_arn = '<index-arn>'
index_region_short_id = '<index-region-short-id>'

s3vectors = boto3.client('s3vectors', region_name=index_region_short_id)

num_vectors = 0
verbose = True  # Set to False to only print final results.

# For each JSON file in the source directory...
for filename in os.listdir(source_json_file_path):
    if filename.endswith('.json'):
        # ...read the JSON file, and then...
        with open(os.path.join(source_json_file_path, filename), 'r') as f:
            file = json.load(f)
            # ...for each object in the Unstructured-formatted JSON array...
            for object in file:
                # ...add the object's text and vectors to the S3 vector index.
                # Use the following format:
                # {
                #     "key": "<random-uuid>",
                #     "data": {
                #         "float32": <embedding-array>
                #     },
                #     "metadata": {
                #         "text": "<text-field>"
                #     }
                # }
                json_object = {}
                json_object['key'] = str(uuid.uuid4())
                json_object['data'] = {}
                json_object['metadata'] = {}

                # If the object has no text, do not add it to the
                # vector index, and move on to the next object.
                if 'text' in object:
                    json_object['metadata']['text'] = object['text']
                else:
                    if verbose:
                        print(f"Skipping object with source element ID {object['element_id']} as it has no text")
                    continue

                # If the object has no embeddings, do not add it to the
                # vector index either, and move on to the next object.
                if 'embeddings' in object:
                    json_object['data']['float32'] = object['embeddings']
                else:
                    if verbose:
                        print(f"Skipping object with source element ID {object['element_id']} as it has no embeddings")
                    continue

                # Add the object to the vector index.
                s3vectors.put_vectors(
                    indexArn=index_arn,
                    vectors=[json_object]
                )

                if verbose:
                    print(f"Added a vector entry and assigned it the internal ID {json_object['key']}. " +
                          f"First 20 characters of the text: {object['text'][:20]}")

                num_vectors += 1

print(f"Added {num_vectors} vector entries to the vector index.")
```
Run the script to add the JSON output files’ contents to the vector index. Each object in each JSON output file
is added as a vector entry in the vector index.
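The script above calls put_vectors once per object for simplicity, but put_vectors accepts a list of vectors, so batching the calls can speed up large uploads. The per-request limit is a service quota; check the S3 Vectors documentation for the current value. This sketch assumes an illustrative batch size of 100:

```python
def chunked(items, batch_size=100):
    """Yield successive batches of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Usage with the S3 Vectors client from the script above would look like:
#     for batch in chunked(all_json_objects, batch_size=100):
#         s3vectors.put_vectors(indexArn=index_arn, vectors=batch)

batches = list(chunked(list(range(250)), batch_size=100))
print([len(batch) for batch in batches])  # [100, 100, 50]
```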
In your local Python virtual environment, install the numpy library.
Add the following code to another Python script file in your virtual environment, replacing the following placeholders:
Replace <index-arn> with the ARN of the vector index that you created previously in Step 2.
Replace <index-region-short-id> with the short ID of the region where your vector index is located, for example us-east-1.
Replace <sentence-to-embed> with the search text that you want to embed for the query.
```python
import boto3
import json
import numpy as np

index_arn = '<index-arn>'
index_region_short_id = '<index-region-short-id>'

client = boto3.client('s3vectors', region_name=index_region_short_id)

# The sentence to embed.
sentence = '<sentence-to-embed>'

# Generate embeddings for the sentence to embed.
model_id = 'amazon.titan-embed-text-v2:0'
bedrock = boto3.client('bedrock-runtime', region_name=index_region_short_id)
body = {'inputText': f"{sentence}"}
json_string = json.dumps(body)
json_bytes = json_string.encode('utf-8')

response = bedrock.invoke_model(
    modelId=model_id,
    body=json_bytes,
    contentType='application/json',
    accept='application/json'
)

# Get the embeddings for the sentence and prepare them for the query.
response_body = json.loads(response['body'].read().decode())
embedding = response_body['embedding']
embedding = np.array(embedding, dtype=np.float32).tolist()

# Run the query.
query_response = client.query_vectors(
    indexArn=index_arn,
    topK=5,
    queryVector={'float32': embedding}
)

print(f"Original search query: {sentence}")
print("\nTop 5 results by similarity search...")
print('-----')

for match in query_response['vectors']:
    # Fetch each matched vector's metadata by its key.
    get_response = client.get_vectors(
        indexArn=index_arn,
        keys=[match['key']],
        returnData=True,
        returnMetadata=True
    )
    for vector in get_response['vectors']:
        print(vector['metadata']['text'])
        print('-----')
```
Run the script to query the vector index and see the query results.
Replace <index-arn> with the ARN of the vector index that you created earlier in Step 2.
Replace <index-region-short-id> with the short ID of the region where your vector index is located, for example us-east-1.
```python
import boto3

index_arn = '<index-arn>'
index_region_short_id = '<index-region-short-id>'

client = boto3.client('s3vectors', region_name=index_region_short_id)

num_vectors = 0
next_token = None
verbose = True  # Set to False to only print final results.

# List all of the vectors in the S3 vector index.
# Vectors are fetched by "page", so loop through the index in pages.
while True:
    kwargs = {
        'indexArn': index_arn,
        'returnData': True,
        'returnMetadata': True
    }
    if next_token:
        kwargs['nextToken'] = next_token
    response = client.list_vectors(**kwargs)
    for vector in response['vectors']:
        if verbose:
            print(f"Found vector entry with internal ID {vector['key']}. " +
                  f"First 20 characters of the text: {vector['metadata']['text'][:20]}")
        num_vectors += 1
    if 'nextToken' in response:
        next_token = response['nextToken']
    else:
        break

print(f"Total number of vector entries found: {num_vectors}")
```
This operation permanently deletes all vector entries in the vector index and cannot be undone.
Replace the following placeholders:
Replace <index-arn> with the ARN of the vector index that you created earlier in Step 2.
Replace <index-region-short-id> with the short ID of the region where your vector index is located, for example us-east-1.
```python
import boto3

index_arn = '<index-arn>'
index_region_short_id = '<index-region-short-id>'

client = boto3.client('s3vectors', region_name=index_region_short_id)

num_vectors = 0
next_token = None
verbose = True  # Set to False to only print final results.

# Delete all of the vectors in the S3 vector index.
# Vectors are deleted by "page", so loop through the index in pages.
while True:
    kwargs = {
        'indexArn': index_arn,
        'returnMetadata': True
    }
    if next_token:
        kwargs['nextToken'] = next_token
    response = client.list_vectors(**kwargs)
    # Delete each vector by its key.
    for vector in response['vectors']:
        if verbose:
            print(f"Deleting vector entry with internal ID {vector['key']}. " +
                  f"First 20 characters of the text: {vector['metadata']['text'][:20]}")
        client.delete_vectors(
            indexArn=index_arn,
            keys=[vector['key']]
        )
        num_vectors += 1
    if 'nextToken' in response:
        next_token = response['nextToken']
    else:
        break

print(f"Deleted {num_vectors} vector entries from the vector index.")
```