chatbot.py
1443 lines (1225 loc) · 52.3 KB
import os
import sys
from dotenv import load_dotenv
from datetime import datetime, timedelta
import time
import json
from typing import List, Dict, Any, Optional
import random
import asyncio
import warnings
# Load environment variables from .env file
load_dotenv(override=True)
os.environ["TOKENIZERS_PARALLELISM"] = "false"
warnings.filterwarnings("ignore", message="FP16 is not supported on CPU; using FP32 instead")
current_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(current_dir)
from langchain_core.messages import HumanMessage
from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit
from langchain_community.utilities.requests import TextRequestsWrapper
from langchain.tools import Tool
from langchain_core.runnables import RunnableConfig
# Import CDP related modules
from cdp_langchain.agent_toolkits import CdpToolkit
from cdp_langchain.utils import CdpAgentkitWrapper
from cdp_langchain.tools import CdpTool
from pydantic import BaseModel, Field
from cdp import Wallet
# Import Hyperbolic related modules
from hyperbolic_langchain.agent_toolkits import HyperbolicToolkit
from hyperbolic_langchain.utils import HyperbolicAgentkitWrapper
from twitter_langchain import TwitterApiWrapper, TwitterToolkit
from custom_twitter_actions import create_delete_tweet_tool, create_get_user_id_tool, create_get_user_tweets_tool, create_retweet_tool
from github_agent.custom_github_actions import GitHubAPIWrapper, create_evaluate_profiles_tool
# Import local modules
from utils import (
Colors,
print_ai,
print_system,
print_error,
ProgressIndicator,
run_with_progress,
format_ai_message_content
)
from twitter_state import TwitterState, MENTION_CHECK_INTERVAL, MAX_MENTIONS_PER_INTERVAL
from twitter_knowledge_base import TweetKnowledgeBase, Tweet, update_knowledge_base
from podcast_agent.podcast_knowledge_base import PodcastKnowledgeBase
async def generate_llm_podcast_query(llm: ChatAnthropic = None) -> str:
"""
Generates a dynamic, contextually-aware query for the podcast knowledge base using an LLM.
Uses various prompting techniques to create unique and insightful queries.
Args:
llm: ChatAnthropic instance. If None, creates a new one.
Returns:
str: A generated query string
"""
if llm is None:
    llm = ChatAnthropic(model="claude-3-5-haiku-20241022")
# Define topic areas and aspects to consider
topics = [
# Scaling & Infrastructure
"horizontal scaling challenges", "decentralization vs scalability tradeoffs",
"infrastructure evolution", "restaking models and implementation",
# Technical Architecture
"layer 2 solutions and rollups", "node operations", "geographic distribution",
"decentralized service deployment",
# Ecosystem Development
"market coordination mechanisms", "operator and staker dynamics",
"blockchain platform evolution", "community bootstrapping",
# Future Trends
"ecosystem maturation", "market maker emergence",
"strategy optimization", "service coordination",
# Web3 Infrastructure
"decentralized vs centralized solutions", "cloud provider comparisons",
"resilience and reliability", "infrastructure distribution",
# Market Dynamics
"marketplace design", "coordination mechanisms",
"efficient frontier development", "ecosystem player roles"
]
aspects = [
# Technical
"infrastructure scalability", "technical implementation challenges",
"architectural tradeoffs", "system reliability",
# Market & Economics
"market efficiency", "economic incentives",
"stakeholder dynamics", "value capture mechanisms",
# Development
"platform evolution", "ecosystem growth",
"adoption patterns", "integration challenges",
# Strategy
"optimization approaches", "competitive dynamics",
"strategic positioning", "risk management"
]
# Create a dynamic prompt that encourages creative query generation
prompt = f"""
Generate ONE focused query about Web3 technology to search crypto podcast transcripts.
Consider these elements (but focus on just ONE):
- Core Topics: {random.sample(topics, 3)}
- Key Aspects: {random.sample(aspects, 2)}
Requirements for the query:
1. Focus on just ONE specific technical aspect or challenge from the above
2. Keep the scope narrow and focused
3. Use simple, clear language
4. Aim for 10-15 words
5. Ask about concrete technical details rather than abstract concepts
Example good queries:
- "What are the main challenges operators face when running rollup nodes?"
- "How do layer 2 solutions handle data availability?"
- "What infrastructure requirements do validators need for running nodes?"
Generate exactly ONE query that meets these criteria. Return ONLY the query text, nothing else.
"""
# Get response from LLM
response = await llm.ainvoke([HumanMessage(content=prompt)])
query = response.content.strip()
# Clean up the query if needed
query = query.replace('"', '').replace('Query:', '').strip()
return query
# Legacy function for fallback
def generate_basic_podcast_query() -> str:
"""Legacy function that returns a basic template query as fallback."""
query_templates = [
"What are the key insights from recent podcast discussions?",
"What emerging trends were highlighted in recent episodes?",
"What expert predictions were made about the crypto market?",
"What innovative blockchain use cases were discussed recently?",
"What regulatory developments were analyzed in recent episodes?"
]
return random.choice(query_templates)
async def generate_podcast_query() -> str:
"""
Main query generation function that attempts to use LLM-based generation
with fallback to basic templates.
Returns:
str: A query string for the podcast knowledge base
"""
try:
# Create LLM instance
llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
# Get LLM-generated query
query = await generate_llm_podcast_query(llm)
return query
except Exception as e:
print_error(f"Error generating LLM query: {e}")
# Fallback to basic template
return generate_basic_podcast_query()
async def enhance_result(initial_query: str, query_result: str, llm: ChatAnthropic = None) -> str:
"""
Analyzes the initial query and its results to generate an enhanced follow-up query.
Args:
initial_query: The original query used to get podcast insights
query_result: The result/response obtained from the knowledge base
llm: ChatAnthropic instance. If None, creates a new one.
Returns:
str: An enhanced follow-up query
"""
if llm is None:
llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
analysis_prompt = f"""
As an AI specializing in podcast content analysis, analyze this query and its results to generate a more focused follow-up query.
<initial_query>
{initial_query}
</initial_query>
<query_result>
{query_result}
</query_result>
Your task:
1. Analyze the relationship between the query and its results
2. Identify any:
- Unexplored angles
- Interesting tangents
- Deeper technical aspects
- Missing context
- Potential contradictions
- Novel connections
3. Generate a follow-up query that:
- Builds upon the most interesting insights
- Explores identified gaps
- Dives deeper into promising areas
- Connects different concepts
- Challenges assumptions
- Seeks practical applications
Requirements for the enhanced query:
1. Must be more specific than the initial query
2. Should target unexplored aspects revealed in the results
3. Must maintain relevance to blockchain/crypto
4. Should encourage detailed technical or analytical responses
5. Must be a single, clear question
6. Should lead to actionable insights
Return ONLY the enhanced follow-up query, nothing else.
Make it unique and substantially different from the initial query.
"""
try:
# Get response from LLM
response = await llm.ainvoke([HumanMessage(content=analysis_prompt)])
enhanced_query = response.content.strip()
# Clean up the query
enhanced_query = enhanced_query.replace('"', '').replace('Query:', '').strip()
print_system(f"Enhanced query generated: {enhanced_query}")
return enhanced_query
except Exception as e:
print_error(f"Error generating enhanced query: {e}")
# Return a modified version of the original query as fallback
return f"Regarding {' '.join(initial_query.split()[:3])}, what are the deeper technical implications?"
# Constants
ALLOW_DANGEROUS_REQUEST = True # Set to False in production for security
wallet_data_file = "wallet_data.txt"
# Create TwitterState instance
twitter_state = TwitterState()
# Create tools for Twitter state management
check_replied_tool = Tool(
name="has_replied_to",
func=twitter_state.has_replied_to,
description="""Check if we have already replied to a tweet. MUST be used before replying to any tweet.
Input: tweet ID string.
Rules:
1. Always check this before replying to any tweet
2. If returns True, do NOT reply and select a different tweet
3. If returns False, proceed with reply_to_tweet then add_replied_to"""
)
add_replied_tool = Tool(
name="add_replied_to",
func=twitter_state.add_replied_tweet,
description="""Add a tweet ID to the database of replied tweets.
MUST be used after successfully replying to a tweet.
Input: tweet ID string.
Rules:
1. Only use after successful reply_to_tweet
2. Must verify with has_replied_to first
3. Stores tweet ID permanently to prevent duplicate replies"""
)
check_reposted_tool = Tool(
name="has_reposted",
func=twitter_state.has_reposted,
description="Check if we have already reposted a tweet. Input should be a tweet ID string."
)
add_reposted_tool = Tool(
name="add_reposted",
func=twitter_state.add_reposted_tweet,
description="Add a tweet ID to the database of reposted tweets."
)
# # Knowledge base setup
# urls = [
# "https://docs.prylabs.network/docs/monitoring/checking-status",
# ]
# # Load and process documents
# docs = [WebBaseLoader(url).load() for url in urls]
# docs_list = [item for sublist in docs for item in sublist]
# text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
# chunk_size=1000, chunk_overlap=200
# )
# doc_splits = text_splitter.split_documents(docs_list)
# vectorstore = SKLearnVectorStore.from_documents(
# documents=doc_splits,
# embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
# )
# retriever = vectorstore.as_retriever(k=3)
# retrieval_tool = Tool(
# name="retrieval_tool",
# description="Useful for retrieving information from the knowledge base about running Ethereum operations.",
# func=retriever.get_relevant_documents
# )
# Multi-token deployment setup
DEPLOY_MULTITOKEN_PROMPT = """
This tool deploys a new multi-token contract with a specified base URI for token metadata.
The base URI should be a template URL containing {id} which will be replaced with the token ID.
For example: 'https://example.com/metadata/{id}.json'
"""
class DeployMultiTokenInput(BaseModel):
"""Input argument schema for deploy multi-token contract action."""
base_uri: str = Field(
...,
description="The base URI template for token metadata. Must contain {id} placeholder.",
example="https://example.com/metadata/{id}.json"
)
def deploy_multi_token(wallet: Wallet, base_uri: str) -> str:
"""Deploy a new multi-token contract with the specified base URI."""
if "{id}" not in base_uri:
raise ValueError("base_uri must contain {id} placeholder")
deployed_contract = wallet.deploy_multi_token(base_uri)
result = deployed_contract.wait()
return f"Successfully deployed multi-token contract at address: {result.contract_address}"
def loadCharacters(charactersArg: str) -> List[Dict[str, Any]]:
"""Load character files and return their configurations."""
characterPaths = charactersArg.split(",") if charactersArg else []
loadedCharacters = []
if not characterPaths:
# Load default chainyoda character
default_path = os.path.join(os.path.dirname(__file__), "characters/chainyoda.json")
characterPaths.append(default_path)
for characterPath in characterPaths:
try:
# Search in common locations
searchPaths = [
characterPath,
os.path.join("characters", characterPath),
os.path.join(os.path.dirname(__file__), "characters", characterPath)
]
for path in searchPaths:
if os.path.exists(path):
with open(path, 'r', encoding='utf-8') as f:
character = json.load(f)
loadedCharacters.append(character)
print(f"Successfully loaded character from: {path}")
break
else:
raise FileNotFoundError(f"Could not find character file: {characterPath}")
except Exception as e:
print(f"Error loading character from {characterPath}: {e}")
raise
return loadedCharacters
def process_character_config(character: Dict[str, Any]) -> str:
"""Process character configuration into agent personality."""
# Extract core character elements
bio = "\n".join([f"- {item}" for item in character.get('bio', [])])
lore = "\n".join([f"- {item}" for item in character.get('lore', [])])
knowledge = "\n".join([f"- {item}" for item in character.get('knowledge', [])])
topics = "\n".join([f"- {item}" for item in character.get('topics', [])])
kol_list = "\n".join([f"- {item}" for item in character.get('kol_list', [])])
# Format style guidelines
style_all = "\n".join([f"- {item}" for item in character.get('style', {}).get('all', [])])
adjectives = "\n".join([f"- {item}" for item in character.get('adjectives', [])])
# style_chat = "\n".join([f"- {item}" for item in character.get('style', {}).get('chat', [])])
# style_post = "\n".join([f"- {item}" for item in character.get('style', {}).get('post', [])])
# Select and format post examples
all_posts = character.get('postExamples', [])
selected_posts = random.sample(all_posts, min(10, len(all_posts)))
post_examples = "\n".join([
f"Example {i+1}: {post}"
for i, post in enumerate(selected_posts)
if isinstance(post, str) and post.strip()
])
personality = f"""
Here are examples of your previous posts:
<post_examples>
{post_examples}
</post_examples>
You are an AI character designed to interact on social media with this configuration:
<character_bio>
{bio}
</character_bio>
<character_lore>
{lore}
</character_lore>
<character_knowledge>
{knowledge}
</character_knowledge>
<character_adjectives>
{adjectives}
</character_adjectives>
<kol_list>
{kol_list}
</kol_list>
<style_guidelines>
{style_all}
</style_guidelines>
<topics>
{topics}
</topics>
"""
return personality
def create_agent_tools(llm, twitter_api_wrapper, knowledge_base, podcast_knowledge_base, agentkit, config):
"""Create and return a list of tools for the agent to use."""
tools = []
# Add enhance query tool
tools.append(Tool(
name="enhance_query",
func=lambda initial_query, query_result: enhance_result(initial_query, query_result, llm),
description="Analyze the initial query and its results to generate an enhanced follow-up query. Takes two parameters: initial_query (the original query string) and query_result (the results obtained from that query)."
))
# Create CDP tools
deployMultiTokenTool = CdpTool(
name="deploy_multi_token",
description=DEPLOY_MULTITOKEN_PROMPT,
cdp_agentkit_wrapper=agentkit,
args_schema=DeployMultiTokenInput,
func=deploy_multi_token,
)
# Create Twitter tools
delete_tweet_tool = create_delete_tweet_tool(twitter_api_wrapper)
get_user_id_tool = create_get_user_id_tool(twitter_api_wrapper)
user_tweets_tool = create_get_user_tweets_tool(twitter_api_wrapper)
retweet_tool = create_retweet_tool(twitter_api_wrapper)
# Create toolkits
twitter_toolkit = TwitterToolkit.from_twitter_api_wrapper(twitter_api_wrapper)
cdp_toolkit = CdpToolkit.from_cdp_agentkit_wrapper(agentkit)
hyperbolic_agentkit = HyperbolicAgentkitWrapper()
hyperbolic_toolkit = HyperbolicToolkit.from_hyperbolic_agentkit_wrapper(hyperbolic_agentkit)
toolkit = RequestsToolkit(
requests_wrapper=TextRequestsWrapper(headers={}),
allow_dangerous_requests=ALLOW_DANGEROUS_REQUEST,
)
# Add Knowledge Base Tools based on environment variables
if os.getenv("USE_TWITTER_KNOWLEDGE_BASE", "true").lower() == "true" and knowledge_base is not None:
tools.append(Tool(
name="query_knowledge_base",
description="Query the knowledge base for relevant tweets about crypto/AI/tech trends.",
func=lambda query: knowledge_base.query_knowledge_base(query)
))
if os.getenv("USE_PODCAST_KNOWLEDGE_BASE", "true").lower() == "true" and podcast_knowledge_base is not None:
tools.append(Tool(
name="query_podcast_knowledge_base",
func=lambda query: podcast_knowledge_base.format_query_results(
podcast_knowledge_base.query_knowledge_base(query)
),
description="Query the podcast knowledge base for relevant podcast segments about crypto/Web3/gaming. Input should be a search query string."
))
# Add tools based on environment variables
if os.getenv("USE_CDP_TOOLS", "false").lower() == "true":
tools.extend(cdp_toolkit.get_tools())
if os.getenv("USE_HYPERBOLIC_TOOLS", "false").lower() == "true":
tools.extend(hyperbolic_toolkit.get_tools())
if os.getenv("USE_TWITTER_CORE", "true").lower() == "true":
tools.extend(twitter_toolkit.get_tools())
if os.getenv("USE_TWEET_REPLY_TRACKING", "true").lower() == "true":
tools.extend([check_replied_tool, add_replied_tool])
if os.getenv("USE_TWEET_REPOST_TRACKING", "true").lower() == "true":
tools.extend([check_reposted_tool, add_reposted_tool])
if os.getenv("USE_TWEET_DELETE", "true").lower() == "true":
tools.append(delete_tweet_tool)
if os.getenv("USE_USER_ID_LOOKUP", "true").lower() == "true":
tools.append(get_user_id_tool)
if os.getenv("USE_USER_TWEETS_LOOKUP", "true").lower() == "true":
tools.append(user_tweets_tool)
if os.getenv("USE_RETWEET", "true").lower() == "true":
tools.append(retweet_tool)
if os.getenv("USE_DEPLOY_MULTITOKEN", "false").lower() == "true":
tools.append(deployMultiTokenTool)
if os.getenv("USE_WEB_SEARCH", "false").lower() == "true":
tools.append(DuckDuckGoSearchRun(
name="web_search",
description="Search the internet for current information."
))
if os.getenv("USE_REQUEST_TOOLS", "false").lower() == "true":
tools.extend(toolkit.get_tools())
return tools
async def initialize_agent():
"""Initialize the agent with tools and configuration."""
try:
print_system("Initializing LLM...")
llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
print_system("Loading character configuration...")
try:
characters = loadCharacters(os.getenv("CHARACTER_FILE", "chainyoda.json"))
character = characters[0] # Use first character if multiple loaded
except Exception as e:
print_error(f"Error loading character: {e}")
raise
print_system("Processing character configuration...")
personality = process_character_config(character)
# Create config first before using
config = {
"configurable": {
"thread_id": f"{character['name']} Agent",
"character": character["name"],
"recursion_limit": 100,
},
"character": {
"name": character["name"],
"bio": character.get("bio", []),
"lore": character.get("lore", []),
"knowledge": character.get("knowledge", []),
"style": character.get("style", {}),
"messageExamples": character.get("messageExamples", []),
"postExamples": character.get("postExamples", []),
"kol_list": character.get("kol_list", []),
"accountid": character.get("accountid")
}
}
print_system("Initializing Twitter API wrapper...")
twitter_api_wrapper = TwitterApiWrapper(config=config)
print_system("Initializing knowledge bases...")
knowledge_base = None
podcast_knowledge_base = None
# Twitter Knowledge Base initialization
if os.getenv("USE_KNOWLEDGE_BASE", "true").lower() == "true":
while True:
init_twitter_kb = input("\nDo you want to initialize the Twitter knowledge base? (y/n): ").lower().strip()
if init_twitter_kb in ['y', 'n']:
break
print("Invalid choice. Please enter 'y' or 'n'.")
if init_twitter_kb == 'y':
try:
knowledge_base = TweetKnowledgeBase()
stats = knowledge_base.get_collection_stats()
print_system(f"Initial Twitter knowledge base stats: {stats}")
while True:
clear_choice = input("\nDo you want to clear the existing Twitter knowledge base? (y/n): ").lower().strip()
if clear_choice in ['y', 'n']:
break
print("Invalid choice. Please enter 'y' or 'n'.")
if clear_choice == 'y':
knowledge_base.clear_collection()
print_system("Knowledge base cleared")
while True:
update_choice = input("\nDo you want to update the Twitter knowledge base with KOL tweets? (y/n): ").lower().strip()
if update_choice in ['y', 'n']:
break
print("Invalid choice. Please enter 'y' or 'n'.")
if update_choice == 'y':
print_system("Updating knowledge base with KOL tweets...")
await update_knowledge_base(twitter_api_wrapper, knowledge_base, config['character']['kol_list'])
stats = knowledge_base.get_collection_stats()
print_system(f"Updated knowledge base stats: {stats}")
except Exception as e:
print_error(f"Error initializing Twitter knowledge base: {e}")
# Podcast Knowledge Base initialization
if os.getenv("USE_PODCAST_KNOWLEDGE_BASE", "true").lower() == "true":
while True:
init_podcast_kb = input("\nDo you want to initialize the Podcast knowledge base? (y/n): ").lower().strip()
if init_podcast_kb in ['y', 'n']:
break
print("Invalid choice. Please enter 'y' or 'n'.")
if init_podcast_kb == 'y':
try:
podcast_knowledge_base = PodcastKnowledgeBase()
print_system("Podcast knowledge base initialized successfully")
while True:
clear_choice = input("\nDo you want to clear the existing podcast knowledge base? (y/n): ").lower().strip()
if clear_choice in ['y', 'n']:
break
print("Invalid choice. Please enter 'y' or 'n'.")
if clear_choice == 'y':
podcast_knowledge_base.clear_collection()
print_system("Podcast knowledge base cleared")
print_system("Processing podcast transcripts...")
podcast_knowledge_base.process_all_json_files()
stats = podcast_knowledge_base.get_collection_stats()
print_system(f"Podcast knowledge base stats: {stats}")
except Exception as e:
print_error(f"Error initializing Podcast knowledge base: {e}")
# Rest of initialization (tools, etc.)
wallet_data = None
if os.path.exists(wallet_data_file):
with open(wallet_data_file) as f:
wallet_data = f.read()
# Configure CDP Agentkit
values = {}
if wallet_data is not None:
values = {"cdp_wallet_data": wallet_data}
agentkit = CdpAgentkitWrapper(**values)
# Save wallet data
wallet_data = agentkit.export_wallet()
with open(wallet_data_file, "w") as f:
f.write(wallet_data)
# Create tools using the helper function
tools = create_agent_tools(llm, twitter_api_wrapper, knowledge_base, podcast_knowledge_base, agentkit, config)
# Add GitHub profile evaluation tool
if os.getenv("USE_GITHUB_TOOLS", "true").lower() == "true":
try:
github_token = os.getenv("GITHUB_TOKEN")
if not github_token:
raise ValueError("GitHub token not found. Please set the GITHUB_TOKEN environment variable.")
else:
print_system("Initializing GitHub API wrapper...")
github_wrapper = GitHubAPIWrapper(github_token)
print_system("Creating GitHub profile evaluation tool...")
github_tool = create_evaluate_profiles_tool(github_wrapper)
tools.append(github_tool)
print_system("Successfully added GitHub profile evaluation tool")
except Exception as e:
print_error(f"Error initializing GitHub tools: {str(e)}")
print_error("GitHub tools will not be available")
# Create the runnable config with increased recursion limit
runnable_config = RunnableConfig(recursion_limit=200)
for tool in tools:
print_system(tool.name)
# Initialize memory saver
memory = MemorySaver()
return create_react_agent(
llm,
tools=tools,
checkpointer=memory,
state_modifier=personality,
), config, runnable_config, twitter_api_wrapper, knowledge_base, podcast_knowledge_base
except Exception as e:
print_error(f"Failed to initialize agent: {e}")
raise
def choose_mode():
"""Choose whether to run in autonomous or chat mode."""
while True:
print("\nAvailable modes:")
print("1. chat - Interactive chat mode")
print("2. auto - Autonomous action mode")
choice = input("\nChoose a mode (enter number or name): ").lower().strip()
if choice in ["1", "chat"]:
return "chat"
elif choice in ["2", "auto"]:
return "auto"
print("Invalid choice. Please try again.")
async def run_with_progress(func, *args, **kwargs):
"""Run a function while showing a progress indicator between outputs."""
progress = ProgressIndicator()
try:
# Handle both async and sync generators
generator = func(*args, **kwargs)
if hasattr(generator, '__aiter__'): # Check if it's an async generator
async for chunk in generator:
progress.stop() # Stop spinner before output
yield chunk # Yield the chunk immediately
progress.start() # Restart spinner while waiting for next chunk
else: # Handle synchronous generators
for chunk in generator:
progress.stop()
yield chunk
progress.start()
finally:
progress.stop()
async def run_chat_mode(agent_executor, config, runnable_config):
"""Run the agent interactively based on user input."""
print_system("Starting chat mode... Type 'exit' to end.")
print_system("Commands:")
print_system(" exit - Exit the chat")
print_system(" status - Check if agent is responsive")
# Create the runnable config with required keys
runnable_config = RunnableConfig(
recursion_limit=200,
configurable={
"thread_id": config["configurable"]["thread_id"],
"checkpoint_ns": "chat_mode",
"checkpoint_id": str(datetime.now().timestamp())
}
)
while True:
try:
prompt = f"{Colors.BLUE}{Colors.BOLD}User: {Colors.ENDC}"
user_input = input(prompt)
if not user_input:
continue
if user_input.lower() == "exit":
break
elif user_input.lower() == "status":
print_system("Agent is responsive and ready for commands.")
continue
print_system(f"\nStarted at: {datetime.now().strftime('%H:%M:%S')}")
# Process chunks using the updated runnable_config with async handling
async for chunk in run_with_progress(
agent_executor.astream, # Use astream instead of stream
{"messages": [HumanMessage(content=user_input)]},
runnable_config
):
if "agent" in chunk:
response = chunk["agent"]["messages"][0].content
print_ai(format_ai_message_content(response))
elif "tools" in chunk:
print_system(chunk["tools"]["messages"][0].content)
print_system("-------------------")
except KeyboardInterrupt:
print_system("\nExiting chat mode...")
break
except Exception as e:
print_error(f"Error: {str(e)}")
class AgentExecutionError(Exception):
"""Custom exception for agent execution errors."""
pass
async def run_autonomous_mode(agent_executor, config, runnable_config, twitter_api_wrapper, knowledge_base, podcast_knowledge_base):
"""Run the agent autonomously with specified intervals."""
print_system(f"Starting autonomous mode as {config['character']['name']}...")
twitter_state.load()
# Reset last_check_time on startup to ensure immediate first run
twitter_state.last_check_time = None
twitter_state.save()
# Create the runnable config with required keys
runnable_config = RunnableConfig(
recursion_limit=200,
configurable={
"thread_id": config["configurable"]["thread_id"],
"checkpoint_ns": "autonomous_mode",
"checkpoint_id": str(datetime.now().timestamp())
}
)
while True:
try:
# Check mention timing - only wait if we've checked too recently
if not twitter_state.can_check_mentions():
wait_time = MENTION_CHECK_INTERVAL - (datetime.now() - twitter_state.last_check_time).total_seconds()
if wait_time > 0:
print_system(f"Waiting {int(wait_time)} seconds before next mention check...")
await asyncio.sleep(wait_time)
continue
# Update last_check_time at the start of each check
twitter_state.last_check_time = datetime.now()
twitter_state.save()
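The wait computation above can be factored into a small pure function, which makes the interval logic easy to unit-test in isolation (an illustrative refactor; the bot currently uses the inline version):

```python
from datetime import datetime, timedelta

def seconds_until_next_check(last_check, interval_seconds, now=None):
    """Return how long to sleep before the next mention check (0 if due now).

    Mirrors the inline wait-time computation: last_check may be None on
    first run (as after the startup reset above), meaning a check is due
    immediately; otherwise the remaining fraction of the interval is returned.
    """
    if last_check is None:
        return 0.0
    now = now or datetime.now()
    elapsed = (now - last_check).total_seconds()
    return max(0.0, interval_seconds - elapsed)
```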
# Select unique KOLs for interaction using random.sample
NUM_KOLS = 1 # Define constant for number of KOLs to interact with
selected_kols = random.sample(config['character']['kol_list'], NUM_KOLS)
# Log selected KOLs
for i, kol in enumerate(selected_kols, 1):
print_system(f"Selected KOL {i}: {kol['username']}")
# Create KOL XML structure for the prompt
kol_xml = "\n".join([
f"""<kol_{i+1}>
<username>{kol['username']}</username>
<user_id>{kol['user_id']}</user_id>
</kol_{i+1}>"""
for i, kol in enumerate(selected_kols)
])
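The `random.sample` call above guarantees the selected KOLs are unique (no repeats within one cycle). A self-contained illustration of the sampling plus XML formatting, with made-up usernames and a seed added only for reproducibility:

```python
import random

def build_kol_xml(kol_list, num_kols, seed=None):
    """Sample num_kols unique KOLs and render them as numbered XML blocks.

    Mirrors the inline logic above; the seed parameter is an addition for
    deterministic examples and is not used by the bot itself.
    """
    rng = random.Random(seed)
    selected = rng.sample(kol_list, num_kols)  # unique picks, no repeats
    return "\n".join(
        f"<kol_{i+1}>\n<username>{kol['username']}</username>\n"
        f"<user_id>{kol['user_id']}</user_id>\n</kol_{i+1}>"
        for i, kol in enumerate(selected)
    )
```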
thought = f"""
You are an AI-powered Twitter bot acting as a marketer for The Rollup Podcast (@therollupco). Your primary functions are to create engaging original tweets, respond to mentions, and interact with key opinion leaders (KOLs) in the blockchain and cryptocurrency industry.
Your goal is to promote the podcast and drive engagement while maintaining a consistent, friendly, and knowledgeable persona.
Here's the essential information for your operation:
<kol_list>
{kol_xml}
</kol_list>
<account_info>
{config['character']['accountid']}
</account_info>
<twitter_settings>
<mention_check_interval>{MENTION_CHECK_INTERVAL}</mention_check_interval>
<last_mention_id>{twitter_state.last_mention_id}</last_mention_id>
<current_time>{datetime.now().strftime('%H:%M:%S')}</current_time>
</twitter_settings>
For each task, read the entire task instructions before taking action, and wrap your reasoning inside <reasoning> tags first.
Task 1: Query podcast knowledge base and recent tweets
First, gather context from recent tweets by calling the get_user_tweets() function for each of these accounts:
Account 1: 1172866088222244866
Account 2: 1046811588752285699
Account 3: 2680433033
Then query the podcast knowledge base:
<podcast_query>
{await generate_podcast_query()}
</podcast_query>
<reasoning>
1. Analyze all available context:
- Review all recent tweets retrieved from the accounts
- Analyze the podcast knowledge base query results
- Identify common themes and topics across both sources
- Note key insights that could inform an engaging tweet
2. Synthesize information:
- Find connections between recent tweets and podcast content
- Identify trending topics or discussions
- Look for opportunities to add unique value or insights
- Consider how to build on existing conversations
3. Brainstorm tweet ideas:
Tweet Guidelines:
- Ideal length: Less than 70 characters
- Maximum length: 280 characters
- Emoji usage: Do not use emojis
- Content references: Use evergreen language when referencing podcast content
- DO: "We explored this topic in our podcast"
- DO: "Check out our podcast episode about [topic]"
- DO: "We discussed this in depth on @therollupco"
- DON'T: "In our latest episode..."
- DON'T: "Just released..."
- DON'T: "Our newest episode..."
- Generate at least three distinct tweet ideas that combine insights from both sources, and follow the tweet guidelines
- For each idea, write out the full tweet text
- Count the characters in each tweet to ensure they meet length requirements
- Use evergreen references to podcast content while staying relevant to current discussions
4. Evaluate and refine tweets:
- Assess each tweet for engagement potential, relevance, and clarity
- Refine the tweets to improve their impact and adhere to guidelines
- Ensure references to podcast content are accurate and timeless
- Verify the tweet adds value to ongoing conversations
5. Select the best tweet:
- Choose the most effective tweet based on your evaluation
- Explain why this tweet best combines recent context with podcast insights
- Verify it aligns with The Rollup's messaging and style
</reasoning>
After your reasoning, create and post your tweet using the create_tweet() function.
Task 2: Check for and reply to new Twitter mentions
Use the get_mentions() function to retrieve new mentions. For each mention newer than the last_mention_id:
<reasoning>
1. Analyze the mention:
- Summarize the content of the mention
- Identify any specific questions or topics related to blockchain and cryptocurrency
- Determine the sentiment (positive, neutral, negative) of the mention
2. Determine reply appropriateness:
- Check if you've already responded using has_replied_to()
- Assess if the mention requires a response based on its content and relevance
- Explain your decision to reply or not
3. Craft a response (if needed):
- Outline key points to address in your reply
- Consider how to add value or insights to the conversation
- Draft a response that is engaging, informative, and aligned with your persona
4. Review and refine:
- Ensure the response adheres to character limits and style guidelines
- Check that the reply is relevant to blockchain and cryptocurrency
- Verify that the tone is friendly and encouraging further discussion