Hashmind is your visionary blogging companion, seamlessly integrating Hashnode with the power of OpenAI's language capabilities and DALL·E's image generation. With Hashmind, your voice becomes the creative force behind effortless content creation. Dictate, edit, delete, and publish your Hashnode blog posts with the simplicity of spoken words. Elevate your storytelling with DALL·E's visual flair, all within the intuitive realm of Hashmind.
- Voice-Powered Magic: Craft your Hashnode blog posts effortlessly through natural voice interactions.
- Mindful Editing: Leverage OpenAI's linguistic prowess for real-time grammar checks and stylistic enhancements.
- Visual Brilliance: Infuse your blogs with captivating images generated by DALL·E for a truly immersive experience.
- Seamless Management: Control your blog content effortlessly – create, delete, and edit – all through intuitive voice commands.
- Personalized Guidance: Receive tailored recommendations based on your unique writing style, audience engagement, and emerging content trends.
I need a sticky menu bar at the bottom to act as a gateway to the app's features/pages (a rough component sketch follows the list):
pages:
- home (voice dictation)
- drafts (list of drafts)
- published (list of published posts)
- settings (settings page)
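A minimal sketch of that sticky bottom bar as a Next.js client component. The route paths and Tailwind classes here are assumptions, not final decisions:

```jsx
// components/BottomNav.jsx — hypothetical sketch; routes and styling are placeholders.
"use client";

import Link from "next/link";

const TABS = [
  { href: "/", label: "Home" }, // voice dictation
  { href: "/drafts", label: "Drafts" },
  { href: "/published", label: "Published" },
  { href: "/settings", label: "Settings" },
];

export default function BottomNav() {
  return (
    <nav className="fixed bottom-0 left-0 right-0 flex justify-around border-t bg-white p-2">
      {TABS.map((tab) => (
        <Link key={tab.href} href={tab.href}>
          {tab.label}
        </Link>
      ))}
    </nav>
  );
}
```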
ElevenLabs for TTS, voice ID (Freya)
- get voice IDs (https://elevenlabs.io/docs/api-reference/get-voices); a lookup sketch follows
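A minimal sketch of calling that endpoint to look up Freya's voice ID; the env var name is an assumption:

```js
// Hypothetical lookup of the "Freya" voice ID via GET /v1/voices.
const ELEVENLABS_API_KEY = process.env.ELEVENLABS_API_KEY; // assumed env var name

fetch("https://api.elevenlabs.io/v1/voices", {
  method: "GET",
  headers: { "xi-api-key": ELEVENLABS_API_KEY },
})
  .then((res) => res.json())
  .then(({ voices }) => {
    const freya = voices.find((voice) => voice.name === "Freya");
    console.log(freya?.voice_id); // this ID goes into the text-to-speech URL below
  })
  .catch((err) => console.error(err));
```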
const options = {
  method: "POST",
  headers: {
    "xi-api-key": process.env.ELEVENLABS_API_KEY, // keep the real key in an env var, never hardcoded
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    text: "hello benaiah, how are you doing?",
    voice_settings: { similarity_boost: 0, stability: 0 },
  }),
};

// The path segment after /text-to-speech/ is the voice ID to speak with
fetch("https://api.elevenlabs.io/v1/text-to-speech/jsCqWAovK2LkecY7zXl4", options)
  .then((response) => response.arrayBuffer()) // convert the response to an ArrayBuffer
  .then((data) => playAudio(data)) // pass the ArrayBuffer to the playAudio function
  .catch((err) => console.error(err));
function playAudio(audioBuffer) {
  // Initialize the AudioContext
  const audioContext = new (window.AudioContext || window.webkitAudioContext)();

  // Decode the ArrayBuffer into an AudioBuffer
  audioContext.decodeAudioData(
    audioBuffer,
    function (buffer) {
      // Create an AudioBufferSourceNode
      const source = audioContext.createBufferSource();
      // Set the buffer to the decoded AudioBuffer
      source.buffer = buffer;
      // Connect the source to the AudioContext's destination (the speakers)
      source.connect(audioContext.destination);
      // Start playing the audio
      source.start(0);
    },
    function (err) {
      console.error(err);
    }
  );
}
Prompt styles (see https://www.greataiprompts.com/guide/chatgpt-prompts-styles/); a prompt-building sketch follows the list:
- Conversational and Casual
- Tutorials and Guides
- Informative and Newsy
- Author styles:
  - Malcolm Gladwell
  - Dan Ariely
  - Brené Brown
  - Jane Austen
  - Gabriel Garcia Marquez
  - Seth Godin
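A minimal sketch of how a selected style could be folded into the OpenAI chat prompt. The style descriptions and model name below are placeholders, not decisions from these notes:

```js
// Hypothetical helper: build the article prompt for a chosen writing style.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const STYLE_HINTS = {
  conversational: "Write in a casual, conversational tone.",
  tutorial: "Write as a step-by-step tutorial/guide.",
  newsy: "Write in an informative, news-style tone.",
  "seth-godin": "Write in the punchy, short-paragraph style of Seth Godin.",
};

async function generateArticle(topic, styleKey) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4", // placeholder model name
    messages: [
      { role: "system", content: `You are a blog-writing assistant. ${STYLE_HINTS[styleKey]}` },
      { role: "user", content: `Write a Hashnode article about: ${topic}` },
    ],
  });
  return completion.choices[0].message.content;
}
```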
You would need a separate service/folder to handle background jobs, since Next.js isn't the ideal place to run jobs/queues with BullMQ.
Or, better still, use Trigger.dev for long-running jobs (I don't want to manage a custom server for this project).
tutorial: https://dev.to/triggerdotdev/creating-a-resume-builder-with-nextjs-triggerdev-and-gpt4-4gmf
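A rough sketch of how a long-running generation job might be wired up with the Trigger.dev v2 SDK; the IDs, event name, and env var are placeholders, and the linked tutorial covers the full setup:

```js
// jobs/generate-article.js — hypothetical sketch, not the tutorial's exact code.
import { TriggerClient, eventTrigger } from "@trigger.dev/sdk";

export const client = new TriggerClient({
  id: "hashmind",
  apiKey: process.env.TRIGGER_API_KEY, // placeholder env var name
});

client.defineJob({
  id: "generate-article",
  name: "Generate article content",
  version: "0.0.1",
  trigger: eventTrigger({ name: "article.generate" }),
  run: async (payload, io) => {
    // Long-running work (OpenAI text, DALL·E cover image, Hashnode publish)
    // runs here instead of inside a Next.js request handler.
    await io.logger.info("Generating article", { title: payload.title });
    // ...call OpenAI / DALL·E, then publish to Hashnode...
  },
});
```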
It turns out Brave doesn't support the Web Speech API's SpeechRecognition yet; a detection/fallback sketch follows. Source: https://stackoverflow.com/questions/74113965/speechrecognition-emitting-network-error-event-in-brave-browser
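A small sketch of guarding the dictation feature: feature-detect SpeechRecognition and handle the "network" error that Brave emits (per the source above). The function and callback names are assumptions for illustration:

```js
// Hypothetical wrapper around the Web Speech API for voice dictation.
const SpeechRecognitionImpl =
  window.SpeechRecognition || window.webkitSpeechRecognition;

function startDictation(onTranscript, onUnsupported) {
  if (!SpeechRecognitionImpl) {
    onUnsupported("Speech recognition is not available in this browser.");
    return null;
  }

  const recognition = new SpeechRecognitionImpl();
  recognition.continuous = true;
  recognition.interimResults = true;

  recognition.onresult = (event) => {
    const transcript = Array.from(event.results)
      .map((result) => result[0].transcript)
      .join(" ");
    onTranscript(transcript);
  };

  recognition.onerror = (event) => {
    // Brave exposes the API but fires a "network" error instead of transcribing.
    if (event.error === "network") {
      onUnsupported("Speech recognition failed (this happens in Brave).");
    } else {
      console.error("Speech recognition error:", event.error);
    }
  };

  recognition.start();
  return recognition;
}
```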
Create a post

mutation PublishPost($input: PublishPostInput!) {
  publishPost(input: $input) {
    post {
      id
      slug
      title
      subtitle
    }
  }
}

Example input:

{
  "input": {
    "title": "Test Post",
    "subtitle": "Nothing much here",
    "publicationId": "628d5138b4bd016fc9a325b8",
    "contentMarkdown": "## Title here \n\n ### Code snippets \n `const a = 24;`",
    "slug": "this-is-cool",
    "tags": [
      {
        "id": "56744721958ef13879b94cad"
      }
    ],
    "metaTags": {
      "title": "",
      "description": "",
      "image": ""
    }
  }
}

Update a post

mutation UpdatePost($input: UpdatePostInput!) {
  updatePost(input: $input) {
    post {
      id
      slug
      title
      subtitle
    }
  }
}

Example input:

{
  "input": {
    "id": "65b15f6705c6776b1b4276a7",
    "subtitle": "Updated"
  }
}
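A minimal sketch of sending one of these mutations to Hashnode's public GraphQL endpoint. This assumes the gql.hashnode.com endpoint with a personal access token in the Authorization header; the env var name is a placeholder:

```js
// Hypothetical helper for calling the Hashnode GraphQL API.
const PUBLISH_POST = `
  mutation PublishPost($input: PublishPostInput!) {
    publishPost(input: $input) {
      post { id slug title subtitle }
    }
  }
`;

async function hashnodeRequest(query, variables) {
  const res = await fetch("https://gql.hashnode.com", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: process.env.HASHNODE_TOKEN, // personal access token (assumed env var)
    },
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors[0].message);
  return data;
}

// Usage with the example input shown above:
// const data = await hashnodeRequest(PUBLISH_POST, { input: { title: "Test Post", /* ... */ } });
```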
Prompt to update a specific post
- USER: I need you to update my post with the title "How to build a blog with Hashnode and Next.js". Add a subtitle "This is a test post", add a new section called "usefulness of AI to humanity", and add a new image to the post.
The AI function should be able to detect the following (a hypothetical extraction sketch follows the list):
- Title
- Subtitle
- New section (to be added; if not specified, return false)
- User query (the main intent to execute on the post, i.e. what should be done to it: optimize it, summarize it, and so on. The query gets fed into the prompt template)
- Cover image (to be added; if not specified, return false)
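A hypothetical sketch of that detection step, asking the model to return a strict JSON object with `false` for anything the user didn't mention. The field names and model are assumptions:

```js
// Hypothetical intent-extraction helper for update requests.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function extractUpdateIntent(userRequest) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4", // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "Extract the user's blog-update request as JSON with the keys " +
          '"title", "subtitle", "newSection", "userQuery", "coverImage". ' +
          "Use false for any field the user did not mention. Respond with JSON only.",
      },
      { role: "user", content: userRequest },
    ],
  });
  return JSON.parse(completion.choices[0].message.content);
}

// e.g. for the USER request above, something like:
// { title: "How to build a blog with Hashnode and Next.js", subtitle: "This is a test post",
//   newSection: "usefulness of AI to humanity", userQuery: "...", coverImage: true }
```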
Creating a post (a pipeline sketch follows the list):
- Generate cover image
- Generate metadata (title, description, image)
- Generate post content
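A rough sketch of that pipeline, chaining DALL·E for the cover image, the chat model for metadata and content, and the `hashnodeRequest`/`PUBLISH_POST` sketch from earlier. Prompts, model names, and structure are placeholders:

```js
// Hypothetical end-to-end "create post" flow.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function createPost(topic, publicationId) {
  // 1. Generate a cover image with DALL·E.
  const image = await openai.images.generate({
    model: "dall-e-3",
    prompt: `Blog cover image for an article about ${topic}`,
    n: 1,
    size: "1024x1024",
  });
  const coverImageUrl = image.data[0].url;

  // 2. Generate metadata + content (could also be separate calls).
  const completion = await openai.chat.completions.create({
    model: "gpt-4", // placeholder model name
    messages: [
      { role: "system", content: "Return JSON with title, subtitle, and contentMarkdown." },
      { role: "user", content: `Write a Hashnode article about: ${topic}` },
    ],
  });
  const { title, subtitle, contentMarkdown } = JSON.parse(
    completion.choices[0].message.content
  );

  // 3. Publish via the PublishPost mutation (see the hashnodeRequest sketch above).
  //    Attach coverImageUrl via the appropriate PublishPostInput field (verify against the schema).
  return hashnodeRequest(PUBLISH_POST, {
    input: { title, subtitle, contentMarkdown, publicationId },
  });
}
```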
Updating an article: the AI should be able to identify what the user wants to update and what the user wants to add to the post.
Possible user request actions for an update (a mapping sketch follows the list):
- Update title
- Update subtitle
- Update cover image
- Update post content
- Add new section
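A small sketch of turning the detected intent into the `input` for the UpdatePost mutation, only including the fields the user actually asked to change. Anything beyond id/title/subtitle/contentMarkdown is an assumption to verify against Hashnode's UpdatePostInput:

```js
// Hypothetical mapper from extracted intent to UpdatePost variables.
function buildUpdateInput(postId, intent, regeneratedMarkdown) {
  const input = { id: postId };

  if (intent.title) input.title = intent.title;
  if (intent.subtitle) input.subtitle = intent.subtitle;
  if (intent.newSection || intent.userQuery) {
    // New sections / content edits are applied by regenerating the markdown first.
    input.contentMarkdown = regeneratedMarkdown;
  }
  // Cover image: attach via the appropriate UpdatePostInput field (verify against the schema).

  return { input };
}

// Usage with the UpdatePost mutation string (analogous to PUBLISH_POST above):
// await hashnodeRequest(UPDATE_POST, buildUpdateInput("65b15f6705c6776b1b4276a7", intent, md));
```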