Supercharge Your Bottom Line TDI: March 20, 2025: Aaron Di Blasi, Sr. PMP, Mind Vault Solutions, Ltd. | Subtitle: Blind and Low-Vision Workshop By Aaron Di Blasi and Dr. Kirk Adams: Copywriting With AI
Release Date: 03/20/2025
Podcasts By Dr. Kirk Adams
00:00
Music.
00:09
Welcome to podcasts by Dr Kirk Adams, where we bring you powerful conversations with leading voices in disability rights, employment and inclusion. Our guests share their expertise, experiences and strategies to inspire action and create a more inclusive world. If you're passionate about social justice or want to make a difference, you're in the right place. Let's dive in with your host, Dr Kirk Adams,
00:38
Hello everybody. This is Dr Kirk Adams speaking to you from my home office in sunny Seattle, Washington. And this is a very special edition of my monthly live stream webinar, which I call Supercharge Your Bottom Line Through Disability Inclusion. And today I have a wonderful guest and a colleague and partner in crime, Aaron Di Blasi, as we work together to accelerate inclusion of people with disabilities in our society. He is with Mind Vault Solutions. And Aaron, if you could give me a quick headline of who you are, I will come back to you shortly for more. Hello, everyone. My name is Aaron Di Blasi. I am the Senior Project Management Professional for a digital marketing firm out of Cleveland, Ohio, by the name of Mind Vault Solutions, Ltd. I am also the publisher of the Top Tech Tidbits, Access Information News, AI Weekly, excuse me, and now Title Two Today newsletters, if you're familiar with any of those. I also work closely with Dr Adams to do his digital marketing as well.
01:47
Thanks, Aaron. So I became acquainted with Aaron through the APEX program, which is www.theapexprogram.com, a virtual training program to launch blind people into cybersecurity. I had connected with Aaron around that, and he
02:14
helped us promote the program through his publications. And as our relationship deepened, Aaron said things like, you should start a podcast, you should have a YouTube channel.
02:27
Yeah, you should write blogs. You need a, you need a, yeah. The difference is, you listen. No one else listens, though. You listened every time. Seriously, kudos. Really. You need a website that's more focused on your overall brand, yeah. And so, and he listens, yeah, piece by piece, we've been building this web presence, and part of this is generating content. And for those who don't know me, just super brief again, I'm Dr Kirk Adams. I'm a blind person, have been since age five, when my retinas detached. I went to a school for blind kids for first, second and third grade in the state of Oregon, learned how to read and write braille, which I do constantly, and to travel confidently and independently with a long white cane. I learned how to type on a typewriter so I could start public school in fourth grade and type my assignments and spelling tests and things for sighted teachers. And I was also given just this wonderful set of experiences, which gave me a great internal locus of control, just a belief that I could do whatever I wanted to do, and that was largely through outdoor experience. In Oregon, we backpacked and camped in the Three Sisters wilderness area, we
03:43
went up on Mount Hood and built big snow forts out of huge snowballs. We went to the Oregon coast and the tide pools. And so I just had that sense of how to move my body as a little blind kid, and was
03:58
given some great gifts there at that school. I was the only blind student in all of my classes from fourth grade through my PhD. So I also had experiences as an academically high-achieving young blind college student having the challenges of trying to find employment. So I've had those frustrating experiences as a blind job seeker. I've also had the privilege of employing many, many hundreds of blind people as
04:31
the president and CEO of the Seattle Lighthouse for the Blind and the American Foundation for the Blind. So employment is my jam, and
04:42
I've also become involved in a number of startup companies using disability tech to accelerate inclusion. And through all of this, I have
04:55
been generating a lot of content. I've done a lot of writing in my day. And of
05:00
course, as AI became more present in the world, and I learn through doing, I wanted to try to use AI whenever I could: ask questions. I started to try to use it to assist in some writing. And then I discovered that Aaron
05:22
had done a lot more with that than I had,
05:26
had a lot more skill and insight than I had, and
05:32
we've generated some content together where I've given him some bare bones and some thoughts and some notes, and he's gonna show some examples of that. Yeah,
05:42
that's okay, yeah, yeah. He's come back and said, you know, what do you think of this? And he readily says, you know, I used AI as a tool
05:53
to enhance
05:55
what you sent me. So we, we thought
06:01
that blind people in particular
06:05
could really be using AI to generate content more effectively and efficiently in order for us to all move our personal missions forward. So we thought we would share some knowledge today, and Aaron proposed we do a workshop, and I proposed we do it during my regularly scheduled monthly live stream webinar time. So here we are. Thank you for joining us, yeah, and thanks everyone who's here live, and thank you to all of you out there in the future who are viewing the recording. So Aaron, I'm going to hand you the talking stick, and I'm here to learn from you. So I will probably pop in with questions from time to time. I hope so, and I have no doubt you will. And I know you will allow time for questions. Oh, for sure, from our audience. Very good.
07:01
All right, everyone. Well, just to top this off, I guess we'll start with the title: Blind and Low-Vision Workshop: How to Generate Professional, High-Quality, High-Ranking, Accurate, Long-Form Copy for Your Personal or Business Brand Using the Premium Versions of Foundational AI Models. Sorry for all that. The reason we included all of those words is because it's very specific to the type of copy that we're going to be talking about today.
07:26
It's very easy, let's just start with this analogy, to open up ChatGPT and say, write me a blog post about dot, dot, dot, and it will, okay. But unfortunately, the internet is filled with that kind of, I
07:39
don't want to say garbage. A lot of people call it garbage, but unfortunately, in SEO terms, it is garbage, because it doesn't rank well, it doesn't do much for your brand, and it certainly doesn't do much for other people in the marketing field who read it and see that it was simply generated with one prompt. So what people want to know is, how do I get it to not only sound human? I mean, that is kind of the number one concern, I think. They want it to sound human, which we cover, that's no problem. But more than anything, they want the context there that the model might not have. And today we're going to go over how to give the model all of the context that it needs and all of the timing that it needs to basically perform as though AI were five years from now. It's a pretty cool workaround. So we're going to start with an example. Recently, Dr Adams attended CSUN, and while he was at CSUN, he sent me daily summaries using Siri. He simply transcribed them and sent them to me via text. I collected those texts into a document, which was basically kind of him writing the article, but it was not in article form. It was just his thoughts on what had happened, you know, people that he had met. This is correct. These were notes on the fly that I just dictated as texts, right? Exactly. And then we just aggregated, choppy, non-sequential, thank you, exactly. That's what I want to point out. Yeah, this is not him writing an article by any sense, just kind of reporting what he had discovered. Okay, so now we take that, and we're going to make that a piece of what we're going to go over today. Okay,
09:10
just to lay the groundwork for anyone who cannot see, we're going to have 10 steps that we're going to go over, and then we're going to have 10 preliminary things that we're going to describe. So there are 10 preliminaries and then 10 steps to the process. A lot of people start the process at "write me a blog post about". Our number 10 step is going to be "write me a blog post about", so we're going to talk about what those other nine steps actually are. Okay, but before we do that, we're going to open up with just a couple of things. Number one, software tools that we're going to use in this course. I'm using Google Chrome, which is just a web browser. I'll be using ChatGPT Plus, the $20 a month version, not the $200 a month version, nothing we need there currently. And I'll also be using Microsoft Word and Notepad. Those are just for data. The purpose of Microsoft Word right now, and a lot of people use just pure text and say, why would you use
10:00
Word? Because we have a belief that in the future, the very near future, LLMs will be able to parse not only the text in the Word document but the hyperlinks attached to the text in the Word document, and that could make for some very serious contextual advantages in the future. So I would recommend you use Microsoft Word, where you can, to store your quote-unquote databases, because they will have links attached to them that would be lost if you were to use text. Makes sense to everybody? Okay. Number two, the computer platform that I'm using for the course is Windows 11 desktop, but obviously you could use Mac, you could do this on mobile. You know, it works on any platform on which you can use an LLM.
10:39
Number three, what is a foundation model?
10:42
Just to clear this up: a foundation model, in the context of artificial intelligence and machine learning, is a large-scale model that is trained on extensive data sets and serves as a base for various downstream tasks. Think ChatGPT, basically. Think Gemini, think Claude. These are major foundational models. They are closed, they're not open source. We'll get into that later, that's not really important right now, but these are the frontier models. These are the models that are currently behaving the best. They are giving us the highest quality. And of those three models, in our testing, we have found that ChatGPT, believe it or not, is the best writer. I know that a lot of people currently believe that Claude is the best writer, but I think what they're talking about is writing in the form of "once upon a time", as in writing a story from scratch and keeping the narrative going. Claude does very well with that, for some reason, but ChatGPT seems to do much better at original and creative writing, especially the kind that we're going to be doing today. So that's why we chose it, if anyone has any questions about that. The competing versions would be Gemini for $20 a month and Anthropic Claude for $20 a month. They'll all do the same thing, and these techniques that we're going to use today can also be used on the free versions. You'll just have limits to how many
11:52
chats you can use. Okay, so moving on to number four. This is one of the most important concepts here, so listen up to this one if you can. What is a rolling context window?
12:04
The rolling context window allows the model to process and consider a fixed maximum number of tokens at any given moment in the session. This is the model's memory, essentially. And when you say, write me a blog post about dot, dot, dot, the model can only give you back results within this window, the single window, and that's why these posts are always small. They're always roughly 4,096 tokens or less, because that's the window, and they're very easy to identify as AI-generated. Okay, the stuff we're going to do today. Aaron? Yes, Kirk. So how do I know how big that window is? Great question. Because I have noticed that sometimes, if I've asked it to,
12:48
take these three documents and combine them, you know, summarize them, it'll tail off toward the end, and all the words won't be there; it won't be dramatically correct. Okay, well, there are a number of reasons that can happen. That sounds more like a cutoff issue. I've never experienced anything quite like that when summarizing. Let's go back to the first question. Yeah, no, that's okay, yeah,
13:15
gotcha. So here's some frame of reference for you, and let me give you the one that won't matter first. Okay: a 128,000-
13:21
token rolling context window. That means nothing to most people, and that's okay. We're going to convert that into Word-document pages at 12-point font, just to give people an idea, because that's how most people parse things. Okay? That is roughly 205 pages of a Word document at 12-point font. Now, that's its entire memory throughout the entire conversation, not just one ask. So that is the maximum amount of context that it can hold in its mind while it's talking to you. The vast majority of people only use about 1% of this memory, but today we're going to use almost 100%,
13:58
if that makes sense.
14:00
Any questions from you?
14:03
No, okay. Okay, I can relate to that. Okay. Claude and Gemini have different rolling context windows, and they're larger, okay? And this can really benefit you if you are trying to put something together that has very large context, and we'll explain what that means once we get into the 10 steps. But just for informational purposes: Anthropic Claude has a 200,000-
14:25
token rolling context window, which is roughly 320 pages in a Word document, for frame of reference. Now, Google Gemini, which is one of our favorites for research, has a 1.5 million token rolling context window, which is roughly 2,400 pages of a Word document in a single conversation. That's why we tend to use it for very large research projects, because of its large context window, if that makes sense. But ChatGPT gives better quality in its much smaller context window. The larger the context window gets, the harder it becomes for the model to remember all of the context, so the looser the answers become.
14:59
Okay?
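The page conversions Aaron quotes can be reproduced with back-of-the-envelope arithmetic. A sketch, assuming the common rules of thumb of roughly 0.75 words per token and roughly 470 words per 12-point Word page; these are approximations, not figures from the workshop:

```python
# Rough conversion from a model's token budget to 12-point Word pages.
# ~0.75 words per token and ~470 words per page are common rules of thumb;
# they land close to the 205 / 320 / 2,400 page figures quoted above.

def tokens_to_pages(tokens, words_per_token=0.75, words_per_page=470):
    """Estimate how many 12-point Word pages fit in a token budget."""
    return round(tokens * words_per_token / words_per_page)

print(tokens_to_pages(128_000))    # ChatGPT:  ~204 pages
print(tokens_to_pages(200_000))    # Claude:   ~319 pages
print(tokens_to_pages(1_500_000))  # Gemini: ~2,394 pages
```

Any such conversion is approximate, since real tokenizers vary by text and language, but it shows why the three windows feel so different in practice.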
15:00
So moving on to number six: what is a single chat context window? Now this, this is the limit. This is what everyone cannot get around. The single chat context window refers to the span of tokens that a language model can process in a single chat interaction, encompassing both the input tokens, which include your prompt or query, and the output tokens, the model's response. It defines the amount of information the model can, quote, remember or consider at any given moment in a conversation. This amount is 4,096 tokens, which comes out to about six pages in a Word document. So anytime you speak to ChatGPT, you only have roughly six pages in and out: what it's going to say to you and what you're going to say to it. So essentially, you have three pages, because its response is generally 50% of the token output. So anything more than three pages in a single conversation, and it's going to truncate, it's going to give you issues, which may be one of the things that you described earlier when you uploaded a bunch of documents. And I think, yeah, we're going to get to how you can get around that, though, you know, with large documents. I'll show you a different way that you can do it. Okay, so number eight, we're getting close to the end here: why the limit? The 4,096-token limit that is returned in a single response, and this is across models, ChatGPT, Gemini and Claude, is likely capped at this value for practical reasons such as computational efficiency, user interface usability, and avoiding overly large outputs that are hard to handle in one go. In short, this limit is not necessary; it's currently here just to keep people in check, so to speak. So that is a limit that we are going to get around today, and we're going to teach people how to create entire documents that are cohesive and far more than 4,096 tokens, which is all that it can return in one chat, if that makes sense. And we'll show you how to get around that.
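That input/output split can be turned into a quick pre-flight check before pasting text into a single chat turn. A sketch, estimating tokens at roughly four characters each (a real tokenizer will differ slightly; the 4,096-token budget and the 50% response share are the figures quoted above):

```python
# Pre-flight check: will this text fit in one chat turn?
# The 4,096-token budget covers BOTH your prompt and the model's reply,
# and the reply typically takes about half, so your input gets ~2,048.

SINGLE_TURN_TOKENS = 4096   # combined input + output budget per turn
RESPONSE_SHARE = 0.5        # the reply usually consumes about half

def fits_in_one_turn(text, chars_per_token=4):
    """Rough estimate of whether `text` leaves room for a full reply."""
    est_tokens = len(text) / chars_per_token
    return est_tokens <= SINGLE_TURN_TOKENS * (1 - RESPONSE_SHARE)

print(fits_in_one_turn("x" * 6_000))   # ~1,500 tokens: fits
print(fits_in_one_turn("x" * 20_000))  # ~5,000 tokens: too big
```

Anything that fails this check is a candidate for the Projects/RAG route or the section-at-a-time approach described later.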
Okay, number nine: research tool examples for aggregating and assembling the data corpus for each subject. We're going to talk about this, but there are some tools that we use to assemble data, because we're not going to trust the data that the LLM has. We're going to provide it with all the data ahead of time. The tools that we use to gather that data are, obviously, Google Search; Perplexity, which has really come a long way; NotebookLM; ChatGPT itself, which we will ask to generate reports, especially with Deep Research now; and also Gemini 1.5, well, it's 2.0 now, with Deep Research. We use that as well. And we use these tools basically to generate current informational reports on our subjects or author before we go ahead and create the actual article.
17:26
All that makes sense.
17:30
Yep, following so far. Okay, great. Number 10, this we will cover later, but it's called the PARE method. I just want to put this in everyone's brain. PARE, P-A-R-E, stands for Prime, Augment, Refresh, Evaluate, but all you have to remember is Prime. Prime is going to be the most important piece of this framework that we're going to use. You also don't have to worry about remembering any of this, because all of it's in a document that we will provide you after the course. So all you have to do is listen, okay? And this moves us on to the actual process for creating an article. And as we do this article, we're going to take the example of Dr Adams' recent visit to CSUN,
18:08
and we're going to identify the subjects and the author. So we are ready to get started on the actual process. Do you have any questions before we do, Dr Adams?
18:17
I want to see how you spun my straw into gold. All
18:22
right. Well, here's how we did it. Okay, step number one of 10: identify the author of the final article that you want to create. This is kind of obvious. In this case, the author is Dr Kirk Adams. Why is that important? Because we have to build a corpus for that author, preferably with text that that author has written themselves. This is not a requirement. You can build an AI persona for someone who has not written their own text, but since Dr Adams has a beautiful dissertation in place and plenty of text that he wrote himself long before AI came along, we aggregate all of that data, and we sample from it in order to pull a statistical analysis of the way that Dr Adams writes specifically. ChatGPT does this really well, and it's a really great workaround. So you identify your author first, and then you put together a corpus about that author, as much information as you have. If a single-page Word document is all you have on them, then you start with that. But for Dr Adams, for instance, we have his dissertation. We have probably 300 or 400 pages on him, I would say probably 200 of which he has written himself. So it's a very good statistical sampling for each article. You don't strictly need that much, but it's something that you want to build over time. And as you generate articles for that author, you want to feed those articles on the back end back into that author's corpus for future generations. That makes sense? Okay, so that's step one: we identify the author, we prepare a corpus for that author, and we set that aside. Step two: we aggregate, assemble and verify the data corpus for the author. We just said that. Step three: identify the number of subjects that your final article will require. Now in this
20:00
case, this is something that you kind of have to do as a human being. You have to go in and see, you know, who did Dr Adams talk about? What are the subjects? So let's pull the subjects out of CSUN. There's CSUN itself. We need to tell the LLM all about CSUN. So we have a database all about CSUN here at The Vault, and we dropped that in. He mentioned three players at CSUN: Awarewolf Gear, Case for Vision and Top Tech Tidbits. So we need to let the LLM know about all three of those subjects as well. So we prepare a data corpus for Awarewolf Gear, we prepare a data corpus for Case for Vision, and we prepare a data corpus for Top Tech Tidbits, and we drop those in along with the author and CSUN data corpuses. We're almost done. He also mentioned that there was uncertainty from disability organizations. Now, this is more an ideal than an actual thing, but it is still something that we can research and give the LLM current knowledge about. So for this, we went to Gemini with Deep Research and asked it, as of today, to give us the most current research on the state of uncertainty from disability organizations around the current political climate, DEI, etc., and it generated a beautiful report on the current state of what's going on. We use that as a corpus and drop that in along with the CSUN, Awarewolf Gear, Case for Vision and Top Tech Tidbits corpuses and the author corpus for Dr Adams. JAWS tells me someone would like the definition of "data corpus". Let's just call it a Word document, a database. It used to be SQL; today it's just a Word document with data in it. I'm sorry for the extreme word, but it is still a data corpus. Each one is. But basically, you're just building a Word document or text file about each subject, and you're saving those files, because you're going to use those files as a base for what we're going to do next.
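The bookkeeping just described is simply files on disk: one document per subject plus one for the author. A minimal sketch, with illustrative file and folder names that stand in for the workshop's actual documents:

```python
# One corpus document per subject, plus one for the author, collected
# into an upload list for a ChatGPT project. Names are illustrative.

from pathlib import Path

AUTHOR = "dr-kirk-adams"
SUBJECTS = [
    "csun",
    "awarewolf-gear",
    "case-for-vision",
    "top-tech-tidbits",
    "disability-org-uncertainty",
]

def corpus_files(folder="corpora"):
    """Author corpus first, then one document per subject."""
    base = Path(folder)
    return [base / f"{AUTHOR}.docx"] + [base / f"{s}.docx" for s in SUBJECTS]
```

Keeping the corpora as `.docx` follows the earlier advice about preserving hyperlinks; the same list would work with plain-text files.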
21:45
Sound good? Yep. Okay, on we go. All right: aggregate, assemble and verify the data corpus for each subject. So let's assume that we just did that, and we have all of those files ready on our computer. I know that's a tall order, especially for blind people. I don't want to seem like we're glossing over that, but we do only have an hour, and that's the reason we're structuring this the way that we are. I understand it's a lot of work, and it can take a lot of time, but as a digital marketing agency, when you do this for someone, you know, like Dr Adams or a different client, it definitely pays for itself in the long run, because by the end of it, you'll be able to generate articles much more quickly, and at much higher quality, than even a team of humans could in the same amount of time. So that's the idea. Okay, so step number six: we're going to use the provided role prime prompt. You remember we talked about that earlier. I said you only need to remember Prime, so that's all we're going to do right now.
22:37
So basically, we're going to take these documents that we have, one, two, three, four, five, six, seven
22:41
documents, and we're going to go into ChatGPT, and we're going to use a feature called Projects. What Projects allows us to do in ChatGPT is list a number of documents and then go ask questions about those documents. You can get around this by providing the documents directly in the chat as you speak, but you will be limited by the context window that we talked about earlier; you'll run into the issue that Dr Adams described, where it will be cut off. If you use Projects, it uses a different technology called RAG, retrieval-augmented generation. We won't get into what that is, but it does not have a limit, is the point. So you want to take these files, add them to a project, and then, in this project chat window, with all of these files in tow, we are going to prime each subject and the author. We're going to start with the author, and we're going to prime the author. I'm going to read you the prompt that we're going to provide you, but I call it "tell me everything there is to know about dot, dot, dot". That's what I call it. So in the chat, I type in: tell me everything there is to know about Dr Kirk Adams. That is not the actual prompt, it's very long, I'm going to read it here in a second, but that's the idea. And we press enter, and it goes into the project files, and it sees those 400 pages that we have on Dr Adams, and it pulls this outline, so to speak. Basically what it's doing is pulling all of the information from the project files into its memory so that it does not have to hallucinate or guess about who Dr Adams is. In addition, it is sampling his writing style, his statistical, mathematical writing style, so that it can duplicate it later on when you ask it to. Okay, once we've done that for the author, in the same chat window we then proceed to do the same thing for each subject. Tell me everything there is to know about CSUN. And it goes into the project files, and it details everything that it knows about CSUN.
And then, when it's done with that, we say: tell me everything there is to know about Awarewolf Gear. And it goes and it describes everything. Now, mind you, this is all in the same chat. So: about Dr Adams, same chat. CSUN, same chat. Awarewolf Gear, same chat. Case for Vision, same chat. Top Tech Tidbits, same chat. Uncertainty from disability organizations, same chat. Now, when we're done with that, we've got all of this context loaded into the model's memory. Okay, we're going to move on. I don't think I'm going to read the prompt, it's very long, but you will get it as part of this program, and you can swap out the author or subject that you want it to
25:00
pull from the corpus, from the database. Okay, so we move on to number seven at this point, once role prime context has been delivered into the chat for the author and each subject and refined by PARE as needed. We'll get to that later, because you can refine each subject and author if there's a specific thing that they want to say. For instance, Dr Adams provided a bunch of detail on CSUN, but I decided not to use it at this point. We'll use it later; I'll show you where that comes in. Here we are, step seven: ask ChatGPT to provide you with five possible article title examples from the author about the final topic that you wish for the author to write about. Select one of these five titles for the final article, or some combination thereof. This is a wonderful time, with all of the context present in the chat, to ask ChatGPT for five possible article titles that the author would write about said subject, considering all of the context that it knows about the author. Yeah, does the choice of the title flavor the article, does it influence it? Yes, it does. Everything flavors everything, because it is statistical analysis between words, so even changing the case of a single letter will affect the output. So yes, it definitely does. And the titles that it suggests are definitely in consideration of who you are, what you've written. It's interesting, it really takes into account everything that you've written before as well, which is why we do it at this point in the chat, because it now knows everything about you. That makes sense?
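The priming pass and the title request are both plain prompts issued in a fixed order, author first, then each subject, all in the same project chat. A sketch of that sequencing; the one-line template below is a stand-in, since the workshop's actual "tell me everything" prompt is much longer and is provided with the course:

```python
# Build the priming prompts in the order the workshop prescribes:
# the author first, then each subject, all destined for ONE chat session.
# PRIME_TEMPLATE is a placeholder for the much longer real prompt.

PRIME_TEMPLATE = "Tell me everything there is to know about {name}."

def priming_prompts(author, subjects):
    return [PRIME_TEMPLATE.format(name=n) for n in [author] + subjects]

def title_prompt(author, topic, n=5):
    """Asked only AFTER priming, so the model has full context."""
    return (f"Acting as {author}, propose {n} possible titles for an "
            f"article about {topic}.")

prompts = priming_prompts(
    "Dr Kirk Adams",
    ["CSUN", "Awarewolf Gear", "Case for Vision", "Top Tech Tidbits",
     "uncertainty from disability organizations"],
)
prompts.append(title_prompt("Dr Kirk Adams", "his recent visit to CSUN"))
```

Each string in `prompts` corresponds to one turn in the single chat session; nothing here calls a model, it only captures the ordering.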
26:36
Okay, all right, so at this point
26:41
it gives us back five titles. We choose one of those titles, or some combination. Usually, I like a combination of two of the titles; it usually does a good job on, like, the first and the third one, so I'll put those together. And in step number eight, this is where the magic happens. This is where we sidestep that 4096
26:58
context window limit. This is where you get to do what no one else can do, and if you're an academic, you will recognize this shortcut. Step eight: use the provided prompt to generate an outline, listen to that outline for the final article title, and copy and paste the final outline into a Word document. Now, ChatGPT at this point will generate an entire outline, and that outline will be within the 4096-token limit. But the great part about it is that we're going to re-feed that outline one section at a time and have it write the article for just that section. What this allows us to do is give each section of the outline its own 4096-token limit, rather than the entire article sharing one.
27:44
And this makes a huge, huge difference in your ability to generate long-form content. You could literally generate a, well, roughly 190-page article, with context, using this method. So mechanically, you've generated the outline and put it in a Word document; then do you just copy and paste each... I'm going to get to that; that's the next step. Okay: use the generated outline in tandem with the following prompt. Now, we've provided a specific prompt that we use to do this. You can use ours to start, but I encourage you to experiment; this prompt will do a really good job. Use the generated outline in tandem with the provided prompt to generate the final article, one outline section at a time. Do not provide more than one outline section at a time. Do not provide any additional chat or context. And in between providing each outline section for generation, copy each final article section as it is generated and paste it into a separate Word document.
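The section-at-a-time loop just described can be sketched like this. Again, this is a hedged illustration: `call_model` is a stub for a real chat-completion call made in the same primed chat, `SECTION_PROMPT` is an invented stand-in for the workshop's actual (much longer) prompt, and the outline is sample data. What matters is the structure: one request per outline section, so each section gets its own output budget instead of the whole article sharing one.

```python
# Sketch of generating a long article one outline section at a time.

SECTION_PROMPT = ("Write the full article text for the following outline "
                  "section only. Do not write any other section:\n\n{section}")

def call_model(messages):
    # Stub: a real version would send the history to the LLM.
    return f"[generated text for: {messages[-1]['content'].splitlines()[-1]}]"

def generate_article(history, outline_sections):
    """Feed the outline back one section at a time, appending each
    generated section to the article (the 'separate Word document')."""
    article_sections = []
    for section in outline_sections:
        history.append({"role": "user",
                        "content": SECTION_PROMPT.format(section=section)})
        text = call_model(history)
        history.append({"role": "assistant", "content": text})
        article_sections.append(text)
    return "\n\n".join(article_sections)

outline = ["I. Introduction", "II. Why Context Matters", "III. Conclusion"]
article = generate_article([], outline)
# `article` now contains one generated block per outline section
```

In the manual workflow, the loop body is you pasting one outline section into the chat, copying the generated text into a Word document, and repeating until the outline is exhausted.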
28:44
This gives you the final article from ChatGPT. Okay. It is at this point that a human being needs to edit that article, review that article. Hallucinations still happen; this is still an LLM. You want to make sure it didn't, you know, say that Dr. Kirk Adams used to be President of the United States; it can do things like that. We've never had any hallucinations ourselves, but we have to recommend that you look through it and make sure that there are none. This is generally where we provide it to the client. Dr. Adams will sign off on it or make any small edits that he wants or likes, which are generally few. And then that article goes into what we call production: it gets turned into HTML, SEO is done on it, keywords are pulled, et cetera, and we begin publication. One thing that pops up sometimes is, when referring to blind people as a group, it might say "them," and I will change it to "us." We hate that; I am also a blind person and part of the community. So that's one that I've seen more than once, but they are very minor. I haven't seen anything factual. Right, right. Nor have I. But again, I think the way that you get those hallucinations is lack of context, you know? So here we're providing a ton, ton, ton of context, so the hallucinations drop to 0.000-something; it's really negligible.
30:01
So that's kind of the beauty of the process, and that's what we use here. That's what we use for you. That's what we use for probably 20 other clients across the world. They're very happy with it. Just know that this may not last long; the LLMs are changing, and the way that this works may change in the future as well. And just to speak specifically to blind persons: I know how difficult this must be, and if you have any thoughts on how to copy and paste large amounts of text back and forth between documents (I know that there are anchors and all types of other workarounds that we can use), I would love to put something together, an article in the future, that helps blind marketing professionals specifically to implement this workflow. So if you have any ideas, please reach out to me or Dr. Adams and let us know
30:50
a question or two. Certainly. I also talked about a couple of other things in the text I sent you. I talked about AI, and how many of the presentations touched upon AI. There was a lot of excitement about AI and accessibility, and there's some talk of universal design, of incorporating AI into universal design so that
31:21
add-on assistive technology may not be needed in the future. There was some excitement around that.
31:29
There was also concern about the bias that is represented in AI, because AI is made up of data points, and people with disabilities use technology less than people without disabilities, so the data of our lived experiences is underrepresented. So you get some of these effects.
31:54
You know, Jenny Lay-Flurrie,
31:56
Chief Accessibility Officer at Microsoft, gave a presentation.
32:01
They now have a partnership with Be My Eyes. They're now receiving data through Be My AI usage, which they're anonymizing and aggregating.
32:15
But, you know, she put up an example of people saying
32:20
to AI, "I'm blind, and I need to do this, this, and this," and, you know, the AI saying, "I'm sorry to hear you're blind." I think that was an example. So there is concern about bias, and, as AI evolves, how can we,
32:42
as people with disabilities, be proactive
32:47
in
32:49
addressing that, and
32:53
so any comments you might have on that, just AI in general, what you're seeing. You said you're working with 20 clients using the type of generative process you just talked about.
33:07
I'd just love your personal take on the state of the union as far as AI and disability and accessibility. Certainly, certainly. I think the big conversation right now that everyone is having is the effect that AI will have on accessibility. There's the "will AI eat accessibility?" conversation, and I think there's a lot of nuance to that conversation.
33:33
I think the one missing element in that conversation is timing, because we say, will AI eat accessibility? I mean, 30 years from now, sure; I don't think anyone would argue that. But a year from now, two years from now, who knows? Those are the current bets: when is it going to happen, how is it going to happen, what is it going to look like, and, specifically, how will it affect blind people? I think a lot of these questions are going to be answered very soon. Alexa Plus is coming out, which is going to be huge. It's going to be huge for me, and I know it's huge for blind people. It will be the first time an LLM has really been merged with voice in the way that I believe blind people need it to be, kind of like ChatGPT's Advanced Voice Mode, which, if you are blind and you have not tried Advanced Voice Mode, I encourage you to rush out today, if you can, and try it, because I think it's life-changing. And I think those kinds of things will be life-changing. I also think, where representation is concerned specifically, and the lived experience of people with a disability specifically, we have room for improvement, once we can collect data from
34:42
people remotely using technology. I mean, I don't know how, but imagine a phone; imagine an Apple headset. Imagine if they could watch Dr. Adams work for an hour and specifically tailor solutions to Dr. Adams' needs. I think that is happening in the future. How far away, I can't say.
35:00
Three, four, five years, possibly, maybe sooner. You know, I just think it's a very interesting time. I think it is the greatest time to be alive, and I think it is the most profound time for people with disabilities to have hope, always. From Neuralink, which recently released a brain interface that allows a man to use an Apple Vision Pro to control his environment (you know, I would be incredibly excited about that if I were someone with paralysis), to the breakthroughs in blindness and retinal medical technologies that are happening currently, and the promise that, with that possible technology, blind people may one day be able to see. And that's not BS. I think it's pretty amazing, personally.
35:45
Another
35:47
aspect of CSUN I commented on as an avid Braille reader was the
35:53
appearance of numerous multi-line braille displays. So I have an
36:01
80-character braille display attached to my laptop, the same one I've used since 2016.
36:07
I have a 32-character
36:11
Braille tablet note taker,
36:15
basically the same technology I saw when I had my first VersaBraille from Telesensory, Incorporated, in the early '90s, where the data was stored on a cassette tape. The braille display is essentially unchanged. There are eight-dot braille cells, with mechanical pins that are driven up and down mechanically, and they fail, they get dirty, they stop working.
36:42
So these multi-line braille displays,
36:45
there were four companies there.
36:50
The premise is, you could be more efficient and effective than you can reading a 32-character braille display. I'm looking at one on my lap right now: one, two, three, four, five, six, seven
37:01
words on the line, and then I have to toggle to the next line. So the premise would be, you could be more efficient and effective, especially with
37:12
computer braille and mathematical braille, long equations that would occupy more lines. And then the other exciting piece is graphics, you know, as a blind kid trying to learn, what's a parabola? Yeah, yeah.
37:32
And then, you know, there was one cool one where
37:36
there was a braille tablet with 100 characters, and they put up an illustration of a bicycle, which was cool. I mean, I could tell two wheels, handlebars; it was fun. But they're expensive, right? They're incredibly expensive,
37:54
$6,000 to, yes, $20,000-plus,
37:59
so my contention was we need another tactile
38:05
reading system that doesn't rely on these mechanical pins. And some people listening may remember the Optacon, which I was shown as a second or third grader in the late '60s. You put your fingers on a little pad, and there were little pins that vibrated. They didn't raise up and down; they vibrated, and they actually replicated print letters. So you would scan a print document, and then you would feel the print. And some people still love the Optacon. Yeah, we have people on the list who recently asked about someone who could fix an Optacon, and we were trading messages back and forth about it.
38:45
but
38:47
I had invited a gentleman, Rafiki Kai, who's a visiting scholar in AI at the University of San Francisco Computer Science Department,
38:56
and I was showing him many of these braille displays and talking about the expense. And I got a text from him yesterday saying he's talked to people in the mechanical engineering department at the University of San Francisco who are using AI to solve mechanical engineering problems, and he wants to get on a call to see if they can take on developing a new refreshable braille technology as a project. So a lot of things come out of CSUN. And
39:24
I digress a little bit, just to say: you talked about the seven subjects, but there were other things in the article as well, the discussion of AI,
39:34
the discussion of the braille displays. And for those who haven't read the article, you can find it at Top Tech Tidbits, but
39:46
so, Aaron, if you have any other thoughts that you think are important to share... Also, how can people get in touch with you? And I'll let people know how they can get in touch with me.
40:00
And then we'll open it up to see if anyone has any questions. Certainly. I think the easiest way, the one everyone's used to, of getting in touch with me is just publisher@toptechtidbits.com. I have many, many email addresses; they all work, and you can reach me at all of them, but that's usually the one that most people use.
40:16
I just basically wanted to answer as many questions as I could today. This workshop was the culmination of questions that I have been asked over about the last two months, you know, from people. So I hope they get a lot of value out of this. We will have this recording posted tomorrow, along with the Word document, which contains all of the prompts that I did not read to you today and everything else that you need to complete the process yourself. And if you have any questions, I might be able to convince Dr. Adams to get back on here with me to answer some questions. Yeah, absolutely, happy to do a part two.
40:52
And we'll probably need to, in about three... just to answer questions. Yeah, you probably will. Yeah.
40:59
And to get in touch with me, it's KirkAdams@DrKirkAdams.com. KirkAdams@DrKirkAdams.com, or LinkedIn.
41:08
Any questions from those of you who are with us live today? And if you're viewing this as an archived recording, please email Aaron and/or me, and we'll be happy to engage in dialogue. Hearing none, I am going to thank you, Aaron, for a very succinct, well-organized description of the process you use, hoping that it will prove of great value to those who view the webinar, and looking forward to our next conversation. Indeed, Dr. Adams, thanks for all you do. Absolutely. Talk to you, my friend. All right, take good care. Bye, everybody. Bye-bye.
41:56
Thank you for listening to Podcasts by Dr. Kirk Adams. We hope you enjoyed today's conversation. Don't forget to subscribe, share, or leave a review at www.DrKirkAdams.com.
42:09
Together, we can amplify these voices and create positive change. Until next time: keep listening, keep learning, and keep making an impact.