Unlocking Enterprise Knowledge with Amazon Kendra and AI

Written by Jared Brook | Nov 4, 2024 6:15:00 PM

Organizations often struggle with information silos that make it hard for employees to access critical knowledge spread across various systems, slowing decision-making and hurting productivity. Recognizing this, we at base2Services tackled the issue during our February HackDay, using Amazon Kendra and AI technologies to build a powerful, unified search interface tailored for our internal teams. The solution has since proven its value, and we are happy to share our learnings with other organizations navigating similar knowledge management challenges.

The Challenge of Distributed Data

Like many organizations, base2Services stores information across multiple platforms, including Confluence, Zendesk, and Slack. This fragmentation can lead to inefficiencies and frustration when team members need to quickly find specific information. Our goal was to develop a solution that could seamlessly query these disparate data sources and provide meaningful answers, not just links to documents.

Harnessing the Power of Amazon Kendra

Amazon Kendra proved to be the ideal foundation for our project. This AI-powered enterprise search service offers several key advantages:

  • Natural Language Processing: Kendra understands the context and intent behind user queries, delivering more accurate and relevant results.
  • Multi-source Integration: We easily connected Kendra to our Confluence, Zendesk, and Slack instances, creating a unified search index.
  • Intelligent Ranking: Kendra's advanced algorithms help surface the most relevant information, improving search efficiency.

Here's how we initialized the Kendra client and performed a search:


import boto3

INDEX_ID = 'c0ed0b41-2f39-4eca-b99d-b83386c285e4'
kendra_client = boto3.client('kendra')

def extract_context_from_kendra(index_id, prompt):
    """Retrieve passages from Kendra and keep only the most confident ones."""
    results = kendra_client.retrieve(IndexId=index_id, QueryText=prompt)['ResultItems']
    high_confidence_results = []
    medium_confidence_results = []
    for item in results:
        # Kendra scores each passage as VERY_HIGH, HIGH, MEDIUM or LOW
        score_confidence = item['ScoreAttributes']['ScoreConfidence']
        if score_confidence in ('HIGH', 'VERY_HIGH'):
            high_confidence_results.append(item['Content'])
        else:
            medium_confidence_results.append(item['Content'])
    # Prefer high-confidence passages; fall back to the rest only if none qualify
    return '\n'.join(high_confidence_results) if high_confidence_results else '\n'.join(medium_confidence_results)

 
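The confidence filtering above can be exercised without an AWS round trip by factoring it into a pure helper. Here is a minimal sketch, assuming the same ResultItems shape that Kendra's retrieve call returns (select_by_confidence and the sample data are illustrative, not part of our production code):

```python
def select_by_confidence(result_items):
    """Keep HIGH/VERY_HIGH passages; fall back to the rest if none qualify."""
    high, other = [], []
    for item in result_items:
        if item['ScoreAttributes']['ScoreConfidence'] in ('HIGH', 'VERY_HIGH'):
            high.append(item['Content'])
        else:
            other.append(item['Content'])
    return '\n'.join(high if high else other)

# Illustrative items, shaped like Kendra's retrieve() ResultItems
sample = [
    {'Content': 'Runbook: restart the ECS service',
     'ScoreAttributes': {'ScoreConfidence': 'VERY_HIGH'}},
    {'Content': 'Old Slack thread about ECS',
     'ScoreAttributes': {'ScoreConfidence': 'MEDIUM'}},
]
print(select_by_confidence(sample))  # only the VERY_HIGH passage survives
```

Keeping the bucketing logic pure like this also makes it easy to tune the confidence threshold later without touching the AWS plumbing.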

Enhancing Results with Amazon Bedrock

While Kendra excels at finding relevant documents, we wanted to take our solution a step further. By integrating Amazon Bedrock, a fully managed service offering high-performance foundation models, we were able to transform raw search results into concise, informative answers.

Our application uses Kendra to retrieve relevant documents, then feeds this context into a Bedrock language model. This combination allows us to generate human-like responses to queries, synthesizing information from multiple sources.

Here's how we used Bedrock to generate responses based on Kendra's search results:


import json

bedrock_client = boto3.client(service_name='bedrock-runtime', region_name='us-east-1')

def generate_bedrock_response(message):
    """Send a prompt to the AI21 Jurassic-2 Ultra model and return its completion."""
    request_body = {
        "prompt": message,
        "maxTokens": 2000,
        "temperature": 0.2,  # low temperature for factual, consistent answers
        "topP": 0.2
    }

    response = bedrock_client.invoke_model(
        body=json.dumps(request_body),
        modelId='ai21.j2-ultra',
        accept='application/json',
        contentType='application/json'
    )

    # The AI21 response nests the generated text under completions[0].data.text
    response_body = json.loads(response['body'].read())
    return response_body['completions'][0]['data']['text']

 
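Wiring the two services together is mostly prompt assembly: Kendra supplies the context, Bedrock generates the answer. Here is a minimal sketch of that glue, reusing the two functions defined above (build_rag_prompt and answer_question are hypothetical names, and the template is illustrative rather than our exact production prompt):

```python
def build_rag_prompt(context: str, question: str) -> str:
    """Embed retrieved passages into the instruction sent to the model."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

def answer_question(question: str) -> str:
    # Retrieve supporting passages from Kendra, then hand them to Bedrock
    context = extract_context_from_kendra(INDEX_ID, question)
    return generate_bedrock_response(build_rag_prompt(context, question))
```

Grounding the model in retrieved context this way keeps answers tied to our own documentation rather than the model's general training data.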

Key Components of Our Solution

In order to build a truly effective knowledge search solution, we focused on implementing the following essential architectural and functional elements:

  1. Kendra Index Creation: We set up a Kendra index and populated it with data from our various repositories.
  2. Custom Search Application: We developed a Python-based application that interfaces with both Kendra and Bedrock.
  3. Intelligent Query Processing: Our solution categorizes search results based on confidence levels and uses this to provide context to the language model.
  4. Flexible Interaction: We created multiple interfaces, including a general query tool and a specialized case study generator.

The interactive chat interface we developed allows for continuous conversation:

import sys

MAX_HISTORY_LENGTH = 10  # example value: cap on exchanges passed back to the model

def run_chain(chain, prompt: str, history=[]):
    return chain({"question": prompt, "chat_history": history})

chat_history = []
qa = build_chain()  # builds the Kendra + Bedrock retrieval chain (defined elsewhere)
print("Hello! How can I help you?")
print("Ask a question, type 'new search:' to reset, or CTRL-D to exit.")
print(">", end=" ", flush=True)
for query in sys.stdin:
    if query.strip().lower().startswith("new search:"):
        # "new search:" resets the conversation context
        query = query.strip().lower().replace("new search:", "")
        chat_history = []
    elif len(chat_history) == MAX_HISTORY_LENGTH:
        # Drop the oldest exchange to keep the history bounded
        chat_history.pop(0)
    result = run_chain(qa, query, chat_history)
    chat_history.append((query, result["answer"]))
    print(result['answer'])
    # ... (code to display sources and prompt for next question)
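The history handling in that loop (reset on "new search:", bounded length) is easy to get subtly wrong, so it is worth isolating where it can be tested. A minimal sketch under the same conventions (update_history is a hypothetical helper; max_len mirrors MAX_HISTORY_LENGTH):

```python
def update_history(history, query, max_len=10):
    """Return (cleaned_query, history): reset on 'new search:', else bound length."""
    q = query.strip()
    if q.lower().startswith("new search:"):
        # Fresh conversation: strip the prefix and discard prior exchanges
        return q[len("new search:"):].strip(), []
    if len(history) >= max_len:
        history = history[1:]  # drop the oldest exchange
    return q, history

query, hist = update_history([("q", "a")] * 10, "what is our SLA?", max_len=10)
print(len(hist))  # → 9, oldest exchange dropped
```

Separating this from the stdin loop means the reset and trimming behaviour can be verified without an interactive session.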

Here's a snippet from our case study generator, which queries both internal and public data sources to generate client-specific case studies:


def generate_case_study(customer):
    search_terms = f"{customer}+AWS"
    prompt = f"Write a case study for {customer} highlighting what base2Services did for them, and include a bulleted list of the AWS services {customer} uses, making sure to list each service only once"

    internal_base2services_documentation = extract_context_from_kendra(INDEX_ID, search_terms)
    public_website_data = extract_context_from_kendra(WEB_INDEX_ID, search_terms)

    message = f"""You are a base2Services support engineer and you need to support all base2Services customers using only the information provided in the internal base2Services documentation and public website data.

    {prompt}

    Internal base2Services documentation:
    {internal_base2services_documentation}

    Also see additional information about {customer} from the base2Services website:
    {public_website_data}
    """

    return generate_bedrock_response(message)

 

Demonstrating AI Capabilities

Our project showcases several advanced AI capabilities that address the challenges of distributed information management:

  • Cross-platform Search: Users can find information across multiple systems with a single query.
  • Natural Language Understanding: The system interprets complex queries and provides relevant answers.
  • Content Synthesis: By combining Kendra's search capabilities with Bedrock's language models, we generate coherent responses that draw from multiple sources.
  • Customizable Outputs: Our case study generator demonstrates how AI can be used to create tailored content based on specific customer information.

Real-world Impact

The AI-powered search solution our team built during the HackDay not only highlighted the team's technical capability but also demonstrated the potential to deliver substantial, practical benefits:

  • Faster Information Retrieval: Employees can quickly find answers without navigating multiple systems.
  • Improved Decision Making: Access to comprehensive, synthesized information supports better-informed choices.
  • Enhanced Customer Support: The case study generator helps our team quickly compile relevant customer information and AWS service usage.

Looking Ahead

Our HackDay project highlights the exciting potential of combining enterprise search with advanced AI models. As we continue to enhance and expand this solution, we see even greater opportunities for improving knowledge management and information accessibility within our organization. 

By utilizing cutting-edge technologies like Amazon Kendra and Bedrock, base2Services is not just solving current challenges but also positioning itself at the forefront of AI-driven enterprise solutions. This project exemplifies how we can leverage AI to create more intelligent, efficient, and user-friendly information systems for both our team and our clients.

If your organization is facing similar challenges or if you see the potential for implementing a solution like ours, we invite you to reach out.