Dynamically Generate PowerPoint Presentations Using PptxGenJS

PowerPoint presentations are an essential medium for conveying information and creating impactful visuals. However, building and modifying them by hand can become a tedious, time-consuming, and repetitive task. With pptxgenjs, an innovative JavaScript library, you can programmatically generate PowerPoint presentations with ease.

What is pptxgenjs?

Pptxgenjs is an open-source library that provides an easy-to-use interface for creating and customizing PowerPoint presentations using JavaScript. Built with simplicity and flexibility in mind, it lets developers dynamically generate high-quality presentations and offers a wide array of features and customization options.

Key Features

  • Simple API: pptxgenjs uses a straightforward API that lets you create presentations with ease. Just a few lines of code can generate slides and add text, images, shapes, charts, tables, and more.
  • Wide range of customization: Pptxgenjs provides many customization options, allowing you to control the style and formatting of your presentation. Customize the fonts, colors, backgrounds, and transitions to match your brand or presentation theme.
  • Dynamic data incorporation: Pptxgenjs makes it practically effortless to incorporate dynamic data into your presentations. Fetch data from APIs, databases, or any other source and populate your tables, charts, and graphs with real-time data (a short sketch of this follows the list).
  • Cross-platform compatibility: The generated PowerPoints are compatible with many platforms, including Windows, macOS, and Linux.
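
For instance, a rough sketch of populating a slide table from fetched data might look like the following. The /api/results endpoint and the record shape are made-up placeholders for illustration; addTable and writeFile are standard pptxgenjs calls.

import pptxgen from "pptxgenjs";

async function buildReport() {
  const pptx = new pptxgen();
  const slide = pptx.addSlide();

  // Placeholder endpoint and record shape, purely for illustration
  const results: { name: string; savings: number }[] =
    await fetch("/api/results").then((res) => res.json());

  const rows = [
    ["Name", "Savings"], // header row
    ...results.map((r) => [r.name, r.savings.toString()]) // one row per record
  ];

  slide.addTable(rows, { x: 0.5, y: 0.5, w: 9 });
  await pptx.writeFile({ fileName: "report.pptx" });
}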

Getting Started / Example

1. Install pptxgenjs using npm (the install command is shown below the list) or include the library directly in your HTML file.
2. Import the library into your script.
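
If you are installing from npm, the package is published under the name pptxgenjs:
npm install pptxgenjs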

Using npm:
  import pptxgen from 'pptxgenjs';
Or directly in HTML:
<script src="path/to/pptxgen.js"></script>
3. Create a presentation object, add slides, and customize them as needed.
let pptx = new pptxgen();
let pptTitle = this.getpptTitle(settings);
let date: string = this.getCurrentDate();
pptx.layout = "LAYOUT_WIDE";
pptx.defineSlideMaster({
      title: "MASTER_SLIDE",
      background: { data: betterPlantsPPTimg.betterPlantsSlide },
      margin: 0.0
});
let slide1 = pptx.addSlide();
slide1.background = { data: betterPlantsPPTimg.betterPlantsTitleSlide };
slide1.addText(pptTitle, {
      x: 0.3,
      y: 2.1,
      w: 5.73,
      h: 1.21,
      align: 'center',
      bold: true,
      color: '1D428A',
      fontSize: 26,
      fontFace: 'Arial (Headings)',
      valign: 'middle',
      isTextBox: true,
      autoFit: true
});
slide1.addText(date, { 
      x: 0.3, 
      y: 4.19, 
      w: 4.34, 
      h: 0.74, 
      align: 'left', 
      color: '8B93B1', 
      fontSize: 20, 
      fontFace: 'Arial (Body)', 
      valign: 'top', 
      isTextBox: true, 
      autoFit: true 
});
4. Save the presentation.
pptx.writeFile({ fileName: this.fileName + '.pptx' });

The code snippets above come from the source code of Oak Ridge National Laboratory’s (ORNL) app Manufacturing Energy Assessment Software for Utility Reduction (MEASUR), which can be found in ORNL’s GitHub repository. MEASUR is an open-source software suite for increasing the understanding of energy use and potential savings opportunities for industrial and commercial equipment. To learn more, visit MEASUR.ornl.gov.
MEASUR uses pptxgenjs to generate a PowerPoint presentation of the Treasure Hunt report, and the lines of code in the example above produce the title slide of that presentation.

Conclusion

With pptxgenjs, MEASUR saves its users hours of work by dynamically generating a PowerPoint report for them, and pptxgenjs can do the same for you. Whether you need to create sales reports, data visualizations, or educational content, this library simplifies the process and offers unmatched flexibility. Try out pptxgenjs and take your presentations to the next level.

To learn more about pptxgenjs and explore its documentation and examples, visit the official GitHub repository.

About Black Slate

Black Slate is a Software Development Consulting Firm that provides single and multiple turnkey software development teams, available on your schedule and configured to achieve success as defined by your requirements independently or in co-development with your team. Black Slate teams combine proven full-stack, DevOps, Agile-experienced lead consultants with Delivery Management, User Experience, Software Development, and QA experts in Business Process Automation (BPA), Microservices, Client- and Server-Side Web Frameworks of multiple technologies, Custom Portal and Dashboard development, Cloud Integration and Migration (Azure and AWS), and so much more. Each Black Slate employee leads with the soft skills necessary to explain complex concepts to stakeholders and team members alike and makes your business more efficient, your data more valuable, and your team better. In addition, Black Slate is a trusted partner of more than 4000 satisfied customers and has a 99.70% “would recommend” rating.

Get the best performance out of Development Containers on Windows …or… Troubleshoot Hot Module Reloading (HMR) not working on a NextJS or React Project running on a Development Container on Windows

The purpose of this document is to describe how to get the best performance out of a Development Container on Windows, or how to troubleshoot Hot Module Reloading (HMR) not working on a NextJS or React project running in a Development Container on Windows.

Introduction

If you have been dabbling in creating development containers for some of your projects, you use create-react-app, Vite, or create-next-app, and your development computer runs Windows, you might notice that the Hot Module Reloading feature either doesn’t work at all or runs very slowly. The reason is the different file systems that Windows and Linux use. While you are developing in a Linux container, the differences between the host computer and the container can cause a bottleneck or unexpected behavior. The best way to resolve this is to use the Windows Subsystem for Linux (WSL), which is available on Windows 10 and above.

Install a Windows Subsystem for Linux (WSL) Distribution

One of the easiest ways to install a WSL distribution is to open the Microsoft Store, search for “Ubuntu”, and then click the “Get” button for the version of your choice.

While you are at it, I also recommend downloading the “Windows Terminal” if you don’t already have it (it is not necessary, but it is a handy tool for keeping multiple consoles for different distributions open in separate tabs).

Upgrade your Distribution to WSL2 (If applicable)

Check the version of WSL that your distribution is using.
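One way to do this is to list your installed distributions along with the WSL version each one uses:
wsl -l -v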

If your distribution is already at 2, then move on, otherwise execute the following command:
wsl.exe --set-version (distro name) 2
To set the default version to be 2 in the future execute the following command:
wsl.exe --set-default-version 2
To set that distribution as your default distribution execute the following command:
wsl --set-default (distro name)

Enable Integration in Docker Desktop

In Docker Desktop, go to Settings, then select Resources and WSL Integration. Ensure that the checkbox for “Enable integration with my default WSL distro” is checked.

Bring your project into your WSL distro, and then run the development container there.

With all of this in place, you now have a Linux distribution running on your Windows computer. In a way, this WSL instance has its own virtual hard drive. You can interact with it by going to \\wsl$\ in Windows Explorer, and you can access the files on your distribution from there.
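
For example, if your distribution is named “Ubuntu” (the distribution and user names here are placeholders), your Linux home directory typically shows up at a path like:
\\wsl$\Ubuntu\home\<your-username>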

So you can copy your application’s files there using the drag-and-drop features of Windows Explorer.

Another way to get the project files into your WSL distribution is to use Linux commands within the distribution. To access the Linux shell of your WSL, either open a command prompt and type “wsl”, or, if you have Windows Terminal installed, open the distribution from its own tab there.

Then you can execute the appropriate cp commands to copy files from /mnt/c to somewhere within your distro, such as ~/myproject.
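
For example, with a placeholder Windows-side source path:
cp -r /mnt/c/Users/<you>/source/repos/myproject ~/myproject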

If your project is on GitHub, you can also use git clone within the Linux shell.
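
For example, with a placeholder repository URL:
git clone https://github.com/your-org/your-repo.git ~/myproject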

While you can indeed use the project files from the /mnt/c drive, you will probably still experience some unexpected behavior. Getting the files inside the distribution is where you will get the most performance bang for your buck.

Install the necessary dependencies onto your Distribution (if applicable)

Usually, your dependencies will be in your development container. However, depending on your situation you may need to install dependencies onto your distribution, as your distribution is now the “host” machine.

A common dependency is Node.js. Below are the steps you could take to install Node.js in your WSL distribution using nvm.

While you’re still in the Linux shell execute the following command:

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
To make nvm available in your path execute the following command:
source ~/.bashrc  
Now that nvm is available, you can install the Node version that you need for your project. To install the latest long-term support (LTS) release, execute the following command:
nvm install --lts
Or you can install a specific version by doing something like this:
nvm install 18.16.0

Launch Visual Studio Code from within your WSL

With everything set up this way, you should have “code” available as an executable on your path. To confirm, execute the following command and check that a path to VS Code/bin appears in your PATH:

printenv
Change directories to wherever you copied your files, including the .devcontainer folder, and then type code . to open it.
When you launch VS Code this way, you will notice that it says you are connected to WSL.
Assuming that your project has a .devcontainer folder with a devcontainer.json file, along with the necessary files for your project, you should now be able to Rebuild and Reopen in Container by pulling up the command palette with CTRL+SHIFT+P.
At this point your project is running in a dev container within WSL, so both your Docker container and the “host” computer (the WSL instance) are using the same file system, and performance and features will be much closer to what you expect.

You will even see it in Docker Desktop.

Now, with that instance of Visual Studio Code open, press CTRL+SHIFT+` to bring up a terminal inside that container.

You should be able to change directories to the project folder and run your dev command such as “npm run dev”, or whatever is appropriate for your project.

You should now notice that the HMR works like you expect.

Example: Create a new Vite + React App

Up to this point we have been discussing how to import an existing project. Let’s talk about starting a new project. After you have the WSL2 Distro installed and ready to go, you can begin to build more applications that use development containers. Here are the steps to create a new Vite + React App using WSL2.

Get into your Linux shell (either by typing wsl in a command prompt, or by selecting the distribution’s tab in Windows Terminal as mentioned earlier).

Change directories to the place where you would like to store your project files. For this example we are just going to use our home directory (~/), and then create a new directory named after your project; in this example we will use “my-new-app”. Change directories into “my-new-app” and execute “code .”, as shown below.
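
In the shell, that sequence of commands looks like this:
cd ~
mkdir my-new-app
cd my-new-app
code .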

Once in Visual Studio Code, do a CTRL+SHIFT+P to bring up the command palette and select the option for “Dev Containers: Add Dev Container Configuration Files”.
Then select “Node.js & TypeScript”.
On the next screen, select the version that you would like to use. In this instance, I selected “18” as it is the current LTS as of this writing.

This will create a new .devcontainer/devcontainer.json file for you.

When prompted select the Reopen in Container option.

If you missed that prompt, you can do it by going to the command palette (CTRL+SHIFT+P) and selecting the “Dev Containers: Rebuild and Reopen in Container” option.
Once the dev container is built, you will be in a new instance of the Visual Studio Code editor.

Do a CTRL+SHIFT+` to bring up a terminal for the development container.

Execute the following command:

npm create vite@latest
I selected the following options.
As it directs, change directories into the vite-project directory, run npm install, and then run the npm run dev command.

You should get a notice like this; go ahead and open it in the browser.

You should see something like this:
Then edit vite-project/src/App.tsx to something like this between lines 24 and 26:
        <p>
          Hello World! Edit <code>src/App.tsx</code> and save to test HMR
        </p>
After that you should notice this in your terminal.
And this in your application:

Conclusion

Development Containers are great. If you are using Windows, it is helpful to know that they run best under the Windows Subsystem for Linux (WSL2). Having the host computer and the container use the same file system improves performance and the overall developer experience.

Stop Wasting Your Time: Why Software Developers Must Embrace ChatGPT Now!

How AI-powered Language Models like ChatGPT can Revolutionize Software Development

Prompt 1:

I want to write a blog post with the following title and theme: “Stop Wasting Your Time: Why Software Developers Must Embrace ChatGPT Now”

Give me a good subtitle and introductory paragraph about how parts of this article will be written by me and parts will be written by ChatGPT.

Response:

As a software developer, you know that time is your most valuable asset. Every day, you are tasked with solving complex problems, writing and debugging code, and keeping up with the latest technologies. With so much to do, it can be challenging to stay productive and efficient. That’s why it’s time to embrace the power of AI-powered language models like ChatGPT.

In this article, we’ll explore why software developers must start using language models now to improve their workflow, save time, and ultimately build better software. To provide a unique perspective, parts of this article will be written by me, a human software developer, and parts will be written by ChatGPT, an AI-powered language model. Together, we’ll provide a comprehensive overview of how ChatGPT and other language models can help you write better code, collaborate with your team more efficiently, and stay up-to-date with the latest trends in software development. So let’s dive in.

ChatGPT can generate unit tests

Prompt 2:

“Write a short, spirited paragraph about how ChatGPT can write unit tests for code. Assume your reader is a software engineer that hates writing unit tests! This will be my first example in a list of examples in my blog post of what ChatGPT can do for software engineers.”

Response:

Are you tired of spending countless hours writing unit tests for your code? Well, rejoice, fellow software engineer, because ChatGPT is here to rescue you from the unit test blues! With its AI-powered capabilities, ChatGPT can effortlessly generate unit tests that cover various scenarios, helping you ensure the robustness and reliability of your code. No longer do you have to endure the monotonous task of manually crafting test cases.

ChatGPT can understand your code, identify potential edge cases, and generate test inputs and assertions with lightning speed. Let the AI handle the grunt work while you focus on what you love most—building innovative software solutions. Say goodbye to the drudgery of writing unit tests and embrace the efficiency and convenience that ChatGPT brings to your development process. It’s time to let AI take the wheel and free you from the clutches of test-writing despair!

Prompt 3:

“Make unit tests using jest and react testing library for the following code”

import React from "react";


interface ListOfThingsProps {
 count: number;
 title: string;
 onClick(): void;
}


export const ListOfThings = ({ count, title, onClick }: ListOfThingsProps) => {
 return (
   <>
     <h1>{title}</h1>
     <button onClick={onClick}>Click me</button>
     <ul>
       {Array(count)
         .fill(1)
         .map((_, i) => (
           <li>{i}</li>
         ))}
     </ul>
   </>
 );
};

Response:

import { render, screen, fireEvent } from '@testing-library/react';
import { ListOfThings } from './ListOfThings';

describe('ListOfThings component', () => {
  test('renders title and button correctly', () => {
    const title = 'Test Title';
    const onClickMock = jest.fn();

    render(<ListOfThings count={5} title={title} onClick={onClickMock} />);

    const titleElement = screen.getByText(title);
    const buttonElement = screen.getByText('Click me');

    expect(titleElement).toBeInTheDocument();
    expect(buttonElement).toBeInTheDocument();
  });

  test('renders correct number of list items', () => {
    const count = 5;

    render(<ListOfThings count={count} title="Test Title" onClick={jest.fn()} />);

    const listItems = screen.getAllByRole('listitem');

    expect(listItems).toHaveLength(count);
  });

  test('calls onClick when button is clicked', () => {
    const onClickMock = jest.fn();

    render(<ListOfThings count={5} title="Test Title" onClick={onClickMock} />);

    const buttonElement = screen.getByText('Click me');

    fireEvent.click(buttonElement);

    expect(onClickMock).toHaveBeenCalledTimes(1);
  });
});



Generic Functions

Prompt 4:

Now give another, shorter paragraph about how chatGPT is great for taking lots of similar functions and making generic ones to reduce code copypasta. And provide a simple code example in typescript.

Response:

Say goodbye to code duplication and the dreaded “copy-paste” scenario with the help of ChatGPT! This powerful AI assistant can take your numerous similar functions and transform them into concise, generic ones, significantly reducing code repetition. By analyzing your code patterns and understanding the underlying logic, ChatGPT can generate elegant solutions that consolidate your repetitive functions into reusable, modular components. Let’s take a simple TypeScript example to illustrate the magic of ChatGPT:

// Original repetitive functions
function calculateAreaOfSquare(side: number): number {
  return side * side;
}

function calculateAreaOfRectangle(length: number, width: number): number {
  return length * width;
}

// Generated generic function by ChatGPT
function calculateArea(lengthOrSide: number, width?: number): number {
  if (width) {
    return lengthOrSide * width; // calculating rectangle area
  } else {
    return lengthOrSide * lengthOrSide; // calculating square area
  }
}

// Usage of the generic function
const squareArea = calculateArea(5);
const rectangleArea = calculateArea(3, 4);

 

In the above example, ChatGPT identified the similarities between the calculateAreaOfSquare and calculateAreaOfRectangle functions and generated a concise, generic function calculateArea. This single function can now handle both square and rectangle calculations, depending on the input parameters. By leveraging ChatGPT’s intelligent code transformations, you can eliminate redundant code and embrace a more efficient and maintainable codebase.

Regex

Prompt 5:

Regex can be a pain to remember. Give me another short paragraph (because software engineers are impatient!) about how ChatGPT is great as a regex helper.

Response:

Regex can be a tricky puzzle to solve, but fear not, for ChatGPT is here to be your regex hero! Bid farewell to the frustrations of struggling with complex pattern matching. With its AI-powered expertise, ChatGPT can assist you in crafting powerful regular expressions with ease.

Simply describe your desired pattern or provide sample input, and ChatGPT will swiftly generate regex code snippets tailored to your needs. No more wracking your brain to recall regex syntax or spending hours debugging your patterns. Let ChatGPT lend you a helping hand, unravel the secrets of regex, and unleash your mastery over text manipulation in a fraction of the time.

Prompt 6:

Give me a regex for a string that starts with any number between 1 and 10, and then a letter, and then any number of “W”s, and then a lowercase “r”, and ends with 2 periods.

Response:

Certainly! Here’s a regular expression that matches the pattern you described:

^[1-9]|10[A-Za-z]+W*r\.\.$

Let’s break it down:

  • ^ asserts the start of the string.
  • [1-9]|10 matches any number between 1 and 9 or the number 10.
  • [A-Za-z] matches any letter, both uppercase and lowercase.
  • + matches one or more occurrences of the preceding pattern, in this case, any letter.
  • W* matches zero or more occurrences of the letter “W”.
  • r matches the lowercase letter “r”.
  • \.\. matches exactly two periods (escaped with backslashes as \., since the dot is a special character in regex).
  • $ asserts the end of the string.

So, this regular expression will successfully match a string that starts with a number between 1 and 10, followed by a letter, followed by any number of “W”s, and ends with two periods.

Syntax Reminders

Prompt 7:

Now within this blog post, give me a short, super funny paragraph about how ChatGPT can help devs remember simple syntax, because even the most senior software engineers can sometimes forget the syntax of how to get the size of an array.

Response:

Who needs a memory like a supercomputer when you have ChatGPT by your side? Even the most seasoned software engineers can occasionally draw a blank on simple syntax, like how to get the size of an array. But fear not, for ChatGPT is here to rescue us from embarrassing brain farts! With its AI-powered intelligence, ChatGPT can instantly remind us that it’s as easy as a slice of pizza to access the array size using the Length property.

Prompt 8:

How do I get the size of an array in C# again?

Response:

To get the size of an array in C#, you can use the Length property. Here’s an example:

int[] myArray = new int[] { 1, 2, 3, 4, 5 };
int size = myArray.Length;
Console.WriteLine("The size of the array is: " + size);



Documentation For Code Snippets

Prompt 9:

Write a very short paragraph about how ChatGPT is the absolute best for writing up documentation for your code. And talk like a pirate.

Response:

Arr matey, if ye be in need of top-notch documentation for yer code, look no further than ChatGPT, the treasure chest of knowledge for developers! With its mighty AI capabilities, ChatGPT can help ye scribble down the finest documentation, savin’ ye from the scurvy of lengthy explanations and unclear code. Whether ye be lookin’ to document functions, APIs, or components, this trusty matey can provide ye with detailed explanations, examples, and even a touch o’ humor. So set sail on the seas of code clarity with ChatGPT, and let it be yer faithful first mate in documentin’ yer precious code treasures, arr!

Prompt 10:

Write me some documentation for the following react typescript code:

{the ListOfThings code that I pasted earlier in this blog}

Response:

## ListOfThings Component

The `ListOfThings` component is a React component written in TypeScript that renders a list of items based on the provided count. It also displays a title and a button that triggers a click event.

### Usage

To use the `ListOfThings` component, follow these steps:

1. Import the `ListOfThings` component into your React component file:

```typescript
import React from "react";
import { ListOfThings } from "./ListOfThings";
```

2. Define the props for the `ListOfThings` component:

```typescript
interface ListOfThingsProps {
  count: number;
  title: string;
  onClick(): void;
}
```

The `count` prop represents the number of items to render, the `title` prop is the title to display, and the `onClick` prop is the event handler for the button click.

3. Render the `ListOfThings` component within your component’s JSX, passing the necessary props:

```typescript
const App = () => {
  const handleButtonClick = () => {
    // Event handler logic goes here
  };

  return (
    <ListOfThings count={3} title="My List" onClick={handleButtonClick} />
  );
};
```

### Props

The `ListOfThings` component accepts the following props:

– `count` (required): A number representing the count of items to render in the list.
– `title` (required): A string representing the title to display.
– `onClick` (required): A function that serves as the event handler for the button click.

### Example

Here’s an example usage of the `ListOfThings` component:

```typescript
import React from "react";
import { ListOfThings } from "./ListOfThings";

const App = () => {
  const handleButtonClick = () => {
    console.log("Button clicked!");
  };

  return (
    <ListOfThings count={3} title="My List" onClick={handleButtonClick} />
  );
};
```

This will render a component with the title “My List,” a button labeled “Click me,” and a list of numbers from 0 to 2.

That’s it! You can now use the `ListOfThings` component to render dynamic lists with a title and a button in your React TypeScript application.

Prompt 11:

For the ListOfThings component I just asked about, now write me some in-code documentation

Response:

import React from "react";

/**
 * Renders a list of items based on the provided count, with a title and a button.
 */
export const ListOfThings = ({ count, title, onClick }: ListOfThingsProps) => {
  /**
   * Handles the click event of the button.
   */
  const handleButtonClick = () => {
    onClick();
  };

  return (
    <>
      <h1>{title}</h1>
      <button onClick={handleButtonClick}>Click me</button>
      <ul>
        {Array(count)
          .fill(1)
          .map((_, i) => (
            <li>{i}</li>
          ))}
      </ul>
    </>
  );
};

/**
 * Props for the ListOfThings component.
 */
interface ListOfThingsProps {
  /**
   * The number of items to render in the list.
   */
  count: number;

  /**
   * The title to display.
   */
  title: string;

  /**
   * Event handler for the button click.
   */
  onClick(): void;
}



Suggesting Libraries

Prompt 12:

This will be the last example in my blog post. Now share one short paragraph about how ChatGPT is great for suggesting various software libraries that might be good to accomplish a task. And talk like Severus Snape from Harry Potter.

Response:

Ah, the inquisitive minds seeking the optimal software libraries for their noble tasks. Fear not, for ChatGPT shall guide you through the mystical realm of programming possibilities. With its arcane wisdom, it can deftly suggest an assortment of enchanting software libraries that may aid you in your quest. Whether it be data manipulation, web development, or dark arts of machine learning, ChatGPT shall unravel the secrets of the coding universe, offering you a myriad of choices like potions in a cauldron. So, brace yourselves, dear developers, and let ChatGPT be your Snape-like guide, revealing the most potent libraries to accomplish your coding endeavors with utmost wizardry.

Prompt 13:

I’m trying to add cool, 3D animations to my angular app with minimal code. Suggest some libraries for me. And stop talking like severus snape. Just talk normal now.

Response:

Certainly! When it comes to adding cool, 3D animations to your Angular app with minimal code, there are a few libraries that you can consider:

    1. **Three.js**: It’s a powerful library that allows you to create stunning 3D graphics and animations in the browser. With a wide range of features and effects, Three.js provides a solid foundation for building immersive experiences.

    2. **Babylon.js**: This library offers a comprehensive set of tools and features for creating rich 3D scenes and interactive animations. It has excellent support for physics, materials, lighting, and more, making it a popular choice for game development and interactive web experiences.

    3. **ng-threejs**: Specifically designed for Angular, ng-threejs provides Angular components and directives that simplify the integration of Three.js into your Angular app. It offers a seamless way to leverage the power of Three.js while following Angular’s component-based architecture.

    4. **Anime.js**: While not specifically focused on 3D, Anime.js is a lightweight animation library that can be used to create stunning and expressive animations with minimal code. It supports a wide range of animation types and is known for its simplicity and ease of use.

These libraries can help you add captivating 3D animations to your Angular app with relative ease. Choose the one that aligns best with your specific needs and preferences, and let your creativity take flight!

Conclusion

Prompt 14:

Write a short conclusion for my blog post. Give it some pizazz!

Response:

In the mesmerizing world of software development, where time is a precious currency, embracing the power of ChatGPT is an absolute game-changer.

From writing unit tests with the grace of a swashbuckling coder to taming the wild seas of code duplication, ChatGPT emerges as a steadfast companion for the modern software engineer. With its AI prowess, it can unravel the secrets of regex, breathe life into documentation, unveil the hidden treasures of libraries, and even sprinkle your codebase with a touch of 3D magic. So, dear developers, set sail on this exhilarating voyage, and let ChatGPT be your trusted first mate. Embrace this AI marvel, and unlock a realm of productivity, creativity, and perhaps a touch of humor along the way. For the time to seize the ChatGPT-powered future of software development is now!


Part 3 of 3: Controlling Plotly in React – Adding Custom Color Markers by Category

How to change the color of the markers based on arbitrary data categories

In this article, I will be showing you how to change the color of the markers depending on arbitrary data categories that could be associated with your data points.

You should already have a basic understanding of:

  • Javascript
  • Typescript
  • Plotly in react
  • React and React hooks

Steps in creating these custom color markers in plotly

  • Create starter app from Part 1 of this series
  • Create file to hold data
  • Create data categories and types
  • Create supporting data object
  • Add dropdown to select data category
  • Group the data by data category

Create starter app from Part 1 of this series

Go to Part 1 of this 3-part series and follow these steps:

  • Create the app in codesandbox.io
  • Import dependencies and starting code

Create file to hold data

Right click on the “src” directory and create a new file called “data.ts”

Create data categories and types

In data.ts, paste the following code:

export const sizeValues = ["small", "medium", "large", "extra large"] as const;
type Size = typeof sizeValues[number];
export const weightValues = [
 "light",
 "average",
 "heavy",
 "super heavy"
] as const;
type Weight = typeof weightValues[number];
export const lengthValues = ["short", "medium", "long", "extra long"] as const;
type Length = typeof lengthValues[number];
export const colorMap: { [x in Size | Weight | Length]: string } = {
 small: "green",
 light: "green",
 short: "green",
 medium: "red",
 average: "red",
 large: "blue",
 heavy: "blue",
 long: "blue",
 "extra large": "black",
 "super heavy": "black",
 "extra long": "black"
};
export type Data = {
 Size: Size;
 Weight: Weight;
 Length: Length;
};

Explanation of code

  • `sizeValues, weightValues, and lengthValues`: These arrays of strings contain the possible values for size, weight, and length. The reason we’re declaring these arrays of strings is because we want to iterate over them later, which we wouldn’t be able to do if we simply declared them as types.
  • `colorMap`: For each data value, we want the marker to display a specific color. This object contains the colors we want associated with each possible data value.
  • `Data`: This is the shape of the data we will associate with each data point

Create supporting data object

In App.tsx, add the following import statement at the top:

import { sizeValues, weightValues, lengthValues, Data, colorMap } from "./data";
Then, on the line above your App component declaration, add the following code:
const supportingData = getArray().map(
 () =>
   ({
     Size: sizeValues[getRandomNumber(sizeValues.length)],
     Weight: weightValues[getRandomNumber(weightValues.length)],
     Length: lengthValues[getRandomNumber(lengthValues.length)]
   } as Data)
);
//export default function App() {

Explanation of code

  • supportingData represents your arbitrary data that is associated with your data points. Each data point will have a random size, weight, and length associated with it (a sample entry is sketched just below).
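
For example, a single generated entry of supportingData could look something like this (the values are random, so yours will differ):
{ Size: "large", Weight: "light", Length: "short" }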

Add dropdown to select data category

Inside the App component in App.tsx, paste the following hook at the top:

 const [dataType, setDataType] = useState<keyof Data>("Size");
Then, inside the JSX, paste the following dropdown:

// <div className="App">
     <div>
       <select
         value={dataType}
         onChange={(e) => setDataType(e.target.value as keyof Data)}
       >
         <option value="Size">Size</option>
         <option value="Length">Length</option>
         <option value="Weight">Weight</option>
       </select>
     </div>
     // <Plot
This will create a dropdown at the top of your application:

Explanation of code:

  • dataType will be “Size”, “Length”, or “Weight” based on which option is selected in the dropdown

Group the data by data category

Just below the dataType hook, paste the following code:

 const groupedData: Partial<PlotData>[] = [];
 for (let i = 0; i < count; i += 1) {
   const dataTypeVal = supportingData[i][dataType];
   const existingGroup = groupedData.find((gd) => gd.name === dataTypeVal);
   if (existingGroup) {
     (existingGroup.x as number[]).push(startingNumbers[i]);
     (existingGroup.y as number[]).push(randomNumbers[i]);
   } else {
     groupedData.push({
       x: [startingNumbers[i]],
       y: [randomNumbers[i]],
       name: dataTypeVal,
       mode: "markers",
       marker: {
         color: colorMap[dataTypeVal]
       }
     });
   }
 }
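Note that this snippet uses the PlotData type. If TypeScript complains about it, import it at the top of App.tsx; it comes from the plotly.js type definitions, so an import along these lines should work:
import { PlotData } from "plotly.js";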
Then, assign the groupedData variable to Plot.data
<Plot
       data={groupedData}

Explanation of code:

  • We’re grouping and separating the data based on which dataType value it has, and identifying which group each point belongs to by assigning that dataType value to the “name” property.
  • We also pick the marker color by passing the dataType value to the colorMap we created previously.
  • As we iterate through the data, if we see that we have already created a group based on the dataType value, we can add additional points to that data object instead of creating a new one.

You should then be able to see the markers change color when you change the dataType!

Conclusion

Not only can you modify a marker’s color, you can also modify its size, opacity, and even gradient. See for yourself by looking at the Plotly.PlotMarker type.

Here is a finished sandbox: https://codesandbox.io/s/plotly-color-by-multi-category-b7zgo2?file=/src/App.tsx

Parts

Part 1 of 3: Controlling Plotly in React – Control the Mode Bar
Part 2 of 3: Controlling Plotly in React – Fully Custom Annotations
Part 3 of 3: Controlling Plotly in React – Adding Custom Color Markers by Category


Part 2 of 3: Controlling Plotly in React – Fully Custom Annotations

Displaying custom annotations on scatterplot click

In this article, I will be showing you how to create fully custom annotations using MUI tooltips with plotly.

But doesn’t plotly.js already allow custom annotations?

Yes, but they’re limited to only a small set of basic HTML elements. This tutorial will show you how to create your own annotations that consist of any JSX element.

You should already have a basic understanding of:

  • Javascript
  • Typescript
  • Plotly in react
  • React and React hooks

Steps in creating these custom annotations in plotly

  • Create starter app from Part 1 of this series
  • Install additional dependencies
  • Create AnnotationContent
  • Create AnnotationCircle
  • Create custom hook with initial state
  • Add Tooltip with Styling
  • Add hook and component into main app

Create starter app from Part 1 of this series

Go to Part 1 of this 3-part series and follow these steps:

    1. Create the app in codesandbox.io
    2. Import dependencies and starting code

Install additional dependencies

For this new application we’ll be using MUI and a very nifty library called “react-use”.

Install the following additional dependencies:

  • @emotion/react
  • @emotion/styled
  • @mui/material
  • react-use

Your dependency list should then look like this:

Create AnnotationContent

We want our annotation to contain the following items:

  • Some text
  • Data from the point that we clicked on
  • An image
  • A button to press.

Steps

  • Right click on “src” and select Create File. Name the new file “AnnotationContent.tsx”
  • Paste in the following code:
import { Button, Paper, Stack, Typography } from "@mui/material";
import React from "react";
interface AnnotationContentProps {
 text: string;
 onCloseClick(): void;
}
export const AnnotationContent = ({
 text,
 onCloseClick
}: AnnotationContentProps) => (
 <Paper elevation={0}>
   <Stack>
     <Typography>A custom annotation!</Typography>
     <Typography>{text}</Typography>
     <div>
       <img alt="" src="https://placekitten.com/75/75" />
     </div>
     <Button onClick={onCloseClick}>Close</Button>
   </Stack>
 </Paper>
);

Create AnnotationCircle

We want to create a circle that will go around the point that we’ve clicked. This circle will be a simple element that anchors the tooltip.

Steps

  • Right click on the “src” directory and select Create File. Name the new file “AnnotationCircle.tsx”.
  • Paste in the following code:
import { Box } from "@mui/material";
import React from "react";
interface AnnotationCircleProps {
 top: number;
 left: number;
 diameter: number;
}
export const AnnotationCircle = React.forwardRef(
 ({ top, left, diameter, ...rest }: AnnotationCircleProps, ref) => (
   <Box
     ref={ref}
     style={{
       position: "absolute",
       top: `calc(${top}px - ${diameter / 2}px)`,
       left: `calc(${left}px - ${diameter / 2}px)`,
       display: "inline-block",
       backgroundColor: "transparent",
       border: `2px solid black`,
       borderRadius: "50%",
       content: '""',
       width: diameter,
       height: diameter
     }}
     {...rest}
   />
 )
);

Explanation of code

  • Because we’re creating a custom component that will be the anchor for a tooltip, we need to pass a reference back up to the soon-to-be parent tooltip. You can read more about this pattern here: https://mui.com/material-ui/react-tooltip/#custom-child-element
  • This element needs to be a transparent circle that is centered on the point that is clicked.

Create custom hook with initial state

Next we’ll create the custom hook that uses AnnotationCircle and AnnotationContent within a MUI Tooltip.

Steps

  • Right click on the “src” directory and select Create File. Name this new file “useCustomAnnotation.ts”
  • Add the following imports to the top of the file:
import { styled, Tooltip, tooltipClasses, TooltipProps } from "@mui/material";
import { PlotMouseEvent } from "plotly.js";
import { useEffect, useState } from "react";
import { useWindowSize } from "react-use";
import { AnnotationCircle } from "./AnnotationCircle";
import { AnnotationContent } from "./AnnotationContent";
  • Paste the following code underneath:
export const useCustomAnnotation = () => {
 const { width, height } = useWindowSize();
 const [annotationData, setAnnotationData] = useState<
   PlotMouseEvent | undefined
 >();
 // close the annotation if the window resizes
 useEffect(() => {
   setAnnotationData(undefined);
 }, [width, height]);
}

Explanation of Code

  • We’re adding all the imports we’ll eventually be using, so some will show as “unused” until the next step.
  • The “annotationData” variable is what will contain our mouse event, which will hold the (x,y) coordinates of point we click on.
  • We need to keep track of the window width and height because we need to close the annotation if the user changes the window size. This is because the coordinates of the point would change if the screen size changed, but the annotation would stay fixed at the previous, now outdated, coordinates. So for simplicity we’ll just clear the annotation.

Add Tooltip with Styling

Finally we’ll add the actual MUI Tooltip to the hook and link the AnnotationCircle and AnnotationContent inside of it. We will apply some styling to the Tooltip so it matches the content Paper element.

Steps

  • In useCustomAnnotation.ts, after the imports but before the hook itself, paste the following code:
const circleDiameter = 20;
const StyledTooltip = styled(({ className, ...props }: TooltipProps) => (
 <Tooltip {...props} classes={{ popper: className }} />
))(() => ({
 [`& .${tooltipClasses.tooltip}`]: {
   backgroundColor: "white",
   border: `1px solid black`
 },
 [`& .${tooltipClasses.arrow}`]: {
   color: "black"
 }
}));
  • Inside the hook after the useEffect you wrote in the previous step, paste the following code:
const coordString = `(${annotationData?.points[0].x},${annotationData?.points[0].y})`;
 const CustomAnnotation = () => {
   if (!annotationData) return <></>;
   return (
     <StyledTooltip
       open
       arrow
       PopperProps={{
         disablePortal: true,
         placement: "auto",
         modifiers: [
           {
             name: "offset",
             options: {
               offset: [0, -10]
             }
           }
         ]
       }}
       title={
         <AnnotationContent
           text={coordString}
           onCloseClick={() => setAnnotationData(undefined)}
         />
       }
     >
       <AnnotationCircle
         left={annotationData.event.x}
         top={annotationData.event.y}
         diameter={circleDiameter}
       />
     </StyledTooltip>
   );
 };
 return { CustomAnnotation, setAnnotationData };

Explanation of Code

  • The annotation should only return the tooltip if annotation data exists
  • MUI’s tooltip is built on top of popper.js, so we modify the PopperProps to ensure the tooltip is close to the AnnotationCircle and that the annotation is automatically placed in a spot where it won’t render off the screen
  • annotationData contains the mouse event of where the user clicked and we pass that information to AnnotationCircle so it knows where to render the circle.

Add hook and component into main app

Now that we’ve made our custom hook we can add it into our main app component!

Steps

  • Paste the following import into App.tsx
import { useCustomAnnotation } from "./useCustomAnnotation";
  • Inside your app component, use the hook and extract out the returned elements:
const { CustomAnnotation, setAnnotationData } = useCustomAnnotation();
  • In your Plot component, set the onClick prop to call setAnnotationData
onClick={setAnnotationData}
  • Finally, add the CustomAnnotation component next to the Plot element (a consolidated sketch of App.tsx follows these steps)
     <CustomAnnotation />
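
Putting those pieces together, the relevant part of App.tsx looks roughly like this (a sketch that assumes the starter code from Part 1; the data and layout props from Part 1 are unchanged and elided here):

import Plot from "react-plotly.js";
import { useCustomAnnotation } from "./useCustomAnnotation";
// ...other imports and data setup from Part 1...

export default function App() {
  const { CustomAnnotation, setAnnotationData } = useCustomAnnotation();
  return (
    <div className="App">
      <Plot
        // ...data and layout props from Part 1...
        onClick={setAnnotationData}
      />
      <CustomAnnotation />
    </div>
  );
}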

Conclusion

And that’s how you add a “custom annotation” to a Plotly app!

Here is a finished sandbox: https://codesandbox.io/s/plotly-custom-annotations-0dexmi?file=/src/App.tsx

In the next article, I’ll be showing you how to give each marker a custom color based on its category.

Parts

Part 1 of 3: Controlling Plotly in React – Control the Mode Bar
Part 2 of 3: Controlling Plotly in React – Fully Custom Annotations
Part 3 of 3: Controlling Plotly in React – Adding Custom Color Markers by Category


Part 1 of 3: Controlling Plotly in React – Control the Modebar

How to take control of Plotly’s modebar buttons with a single, straightforward React hook.

Are you gearing up to write a new data app?
Does your business contain critically huge amounts of data that are interpreted best when visualized?
Do you want to make your data points interactive for the user without writing a ton of code?

Consider using plotly.

Plotly?

Plotly is a popular, feature-rich solution for writing data visualization apps with relatively small amounts of code. Scatter plot charts, bar charts, 3D charts, maps, and much more are all possible with plotly.

In this article…

I’ll be showing you how to control plotly’s modebar buttons with a simple React hook. We’ll control them by selecting their base anchor elements in the HTML, then creating buttons that trigger a click on those elements.

You should already have a basic understanding of:

  • Javascript
  • Typescript (this app will be in Typescript!)
  • Plotly in react
  • React and React hooks

Steps in creating this hook to control plotly’s modebar:

  • Create the app in codesandbox.io
  • Import dependencies and starting code
  • Create new file for hook
  • Extract types
  • Write the hook logic
  • Use hook in main app
  • Hide existing plotly modebar

Create the app in codesandbox.io

Begin by going to codesandbox.io. Click on the “Create” button at the top, then select “React Typescript”

Import dependencies and starting code:

In the “Dependencies” section on the left, import the following dependencies:

  • @types/react-plotly.js
  • plotly.js
  • react-plotly.js

Your dependency list should look like this (the other dependencies should already be installed):

Go to App.tsx. Delete its contents and paste in the following starter code:
import Plot from "react-plotly.js";
import "./styles.css";
const count = 50;
const getArray = () => Array(count).fill(1);
const getRandomNumber = (max: number) => Math.floor(Math.random() * max);
const startingNumbers = getArray().map((_, i) => i);
const randomNumbers = getArray().map(() => getRandomNumber(count));
export default function App() {
 return (
   <div className="App">
     <Plot
       data={[
         {
           x: startingNumbers,
           y: randomNumbers,
           mode: "markers"
         }
       ]}
       layout={{
         title: "Plotly App",
         xaxis: { range: [-5, count] },
         yaxis: { range: [-5, count] },
         dragmode: "lasso",
         uirevision: 1
       }}
     />
   </div>
 );
}
You should then see a basic Plot generated in the preview window!

Create new file for hook

Right click on the “src” directory on the left and click on Create File. Call the file “usePlotlyModebar.ts”

Extract types

At the top of the usePlotlyModebar.ts file, paste the following code

// these strings are the data-titles for plotly's modebar <a> elements
export const modebarActions = [
 "Pan",
 "Box Select",
 "Lasso Select",
 "Download plot as a png",
 "Autoscale",
 "Zoom out",
 "Zoom in"
] as const;
type ModebarAction = typeof modebarActions[number];

Wait! Where did these strings (“Pan”, “Box Select”, etc…) come from?

If you inspect the modebar element and expand down to the bottommost elements, you’ll see that the modebar buttons are anchor elements with unique data-titles. These strings are the data-titles for these anchor elements and we will use them to select these anchor elements and eventually click them.

Wait (again)! Why make a const and a type and not just a type?

You’ll notice I first make an array of strings (const modeBarActions) and then I make a type based on the strings in the array (type ModebarAction). So why didn’t I just make the type? It’s because I plan on iterating over each action string and you can’t do that with a type.

Write the hook logic

Underneath the code written in the previous step, write this hook:


export const usePlotlyModebar = () => {
 const triggerModebarButton = (action: ModebarAction) => {
   const anchorElement = document.querySelector<HTMLAnchorElement>(
     `[data-title="${action}"]`
   );
   anchorElement?.click();
 };
 return { triggerModebarButton };
};

Explanation of code:

The hook contains a single method (triggerModebarButton) that receives a ModebarAction string. That string is used to select the unique modebar button based on the anchor’s data-title, which we then click.

Use hook in main app

In App.tsx, in the App component, import modebarActions and usePlotlyModebar, then add the usePlotlyModebar hook on the first line of the component.

Next, in the return JSX before the plot, add the code that will return a button for each modebarAction:

import { modebarActions, usePlotlyModebar } from "./usePlotlyModebar";
// other code…
export default function App() {
 const { triggerModebarButton } = usePlotlyModebar();
 return (
   <div className="App">
     {modebarActions.map((action) => (
       <button
         key={action}
         style={{ marginRight: "2px" }}
         onClick={() => triggerModebarButton(action)}
       >
         {action}
       </button>
     ))}
{/* other code… */}

Hide existing plotly modebar

Finally, go to styles.css and add the following code. It will hide the existing plotly modebar so you won’t see both. And that’s it!

.modebar-container * {
 visibility: hidden;
}

Conclusion

Here’s a link to a finished codesandbox: https://codesandbox.io/s/plotly-modebar-buttons-exc9qh?file=/src/App.tsx

In the next article for this 3 part series on Controlling plotly in React, I’ll be showing you how to render fully custom annotations.

Parts

Part 1 of 3: Controlling Plotly in React – Control the Modebar
Part 2 of 3: Controlling Plotly in React – Fully Custom Annotations
Part 3 of 3: Controlling Plotly in React – Adding Custom Color Markers by Category

About Black Slate

Black Slate is a Software Development Consulting Firm that provides single and multiple turnkey software development teams, available on your schedule and configured to achieve success as defined by your requirements independently or in co-development with your team. Black Slate teams combine proven full-stack, DevOps, Agile-experienced lead consultants with Delivery Management, User Experience, Software Development, and QA experts in Business Process Automation (BPA), Microservices, Client- and Server-Side Web Frameworks of multiple technologies, Custom Portal and Dashboard development, Cloud Integration and Migration (Azure and AWS), and so much more. Each Black Slate employee leads with the soft skills necessary to explain complex concepts to stakeholders and team members alike and makes your business more efficient, your data more valuable, and your team better. In addition, Black Slate is a trusted partner of more than 4000 satisfied customers and has a 99.70% “would recommend” rating.

The post Part 1 of 3: Controlling Plotly in React – Control the Modebar appeared first on Black Slate.

]]>
255230
Part 3 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) http://www.blackslatesoftware.com/part-3-full-stack-todos-application-with-nextjs-prisma-using-sql-server-and-redux-toolkit-rtk/ Fri, 02 Jun 2023 22:07:08 +0000 http://www.blackslatesoftware.com/?p=255207 The post Part 3 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) appeared first on Black Slate.

]]>

Part 3 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK)

The purpose of this document series is to describe the steps that are necessary to create a “Todos” application using NextJS, Prisma, and Redux Toolkit (RTK).

NextJS is an exciting development tool for creating web applications because it allows your server code and client code to live in the same repository. It is essentially like having a NodeJS server application and a React application in the same repository.

TLDR, show me the code: https://github.com/woodman231/nextjs-prisma-todos-rest

TOPIC: Configure the REST Client Features

Now that we are confident in our REST server, let's focus on getting our client to connect to it. Since NextJS is a full-stack framework, we will be adding the client files to the same project.

Install Redux Toolkit (RTK)

One of the most popular state management tools for React is the Redux Toolkit (RTK). We will be creating a store for our Todos application and using its createApi and injectEndpoints features to consume the server's responses. If you are following along from the previous part, be sure to press Ctrl+C to stop the "npm run dev" process you may have started earlier.

Execute the following command to install these tools:

npm install @reduxjs/toolkit react-redux --save
To create the store, go into the "next-app/features/common" folder that we were in earlier. Create a new folder called: "store", then inside it create a new file called: "index.ts". Give it the following code:
import { configureStore, ConfigureStoreOptions } from '@reduxjs/toolkit'
import { TypedUseSelectorHook, useDispatch, useSelector } from 'react-redux'

export const createStore = (
    options?: ConfigureStoreOptions['preloadedState'] | undefined
) =>
    configureStore({
        reducer: {
        },
    })

export const store = createStore()

export type AppDispatch = typeof store.dispatch
export const useAppDispatch: () => AppDispatch = useDispatch
export type RootState = ReturnType<typeof store.getState>
export const useTypedSelector: TypedUseSelectorHook<RootState> = useSelector
We will be adding to this later, but for now this is at least a blank slate to get an RTK store created from which we will pull the properties of our application state.

Create the base API

First, we will create a base API to use with our application. After the base API is created, we will then add additional endpoints to the api, and make them part of the store.

In the “next-app/features/common/store” folder add a new file called: “api.ts” and give it the following code:

import { createApi, fetchBaseQuery, retry } from '@reduxjs/toolkit/query/react'

// Create our baseQuery instance
const baseQuery = fetchBaseQuery({
  baseUrl: '/api/',
})

const baseQueryWithRetry = retry(baseQuery, { maxRetries: 6 })

/**
 * Create a base API to inject endpoints into elsewhere.
 * Components using this API should import from the injected site,
 * in order to get the appropriate types,
 * and to ensure that the file injecting the endpoints is loaded 
 */
export const api = createApi({
  /**
   * `reducerPath` is optional and will not be required by most users.
   * This is useful if you have multiple API definitions,
   * e.g. where each has a different domain, with no interaction between endpoints.
   * Otherwise, a single API definition should be used in order to support tag invalidation,
   * among other features
   */
  reducerPath: 'applicationApi',
  /**
   * A bare bones base query would just be `baseQuery: fetchBaseQuery({ baseUrl: '/' })`
   */
  baseQuery: baseQueryWithRetry,
  /**
   * Tag types must be defined in the original API definition
   * for any tags that would be provided by injected endpoints
   */
  tagTypes: ['Todos'],
  /**
   * This api has endpoints injected in adjacent files,
   * which is why no endpoints are shown below.
   * If you want all endpoints defined in the same file, they could be included here instead
   */
  endpoints: () => ({}),
})
As you can see, we are telling it to use /api/ as the base URL. One of the neatest features of developing with NextJS is that you don't need to store different environment configurations for different API domains, since the API routes are all relative to your application. We also defined no endpoints in this file because we will be adding them in other files later.

In the “next-app/features/common/store” folder add a new file called: “provider.tsx” and give it the following code:

"use client";

import React from "react";
import { store } from "./index";
import { Provider } from "react-redux";

export function Providers({ children }: { children: React.ReactNode }) {
  return <Provider store={store}>{children}</Provider>;
}
This is a small bit of code, but it will become very important glue that makes our store available throughout the application. I also want to point out that at this point I am essentially turning the UI into a nearly fully client-side rendered application. In this demonstration, no pages will use server-side rendering; as far as I know, the Redux Toolkit store is not available server side.

Create the TODOS API

In the “features/todos” folder create a new folder called: “store”. To the “store” folder create a new file called: “todos.ts”. Give it the following code:

import { Prisma } from '@prisma/client'
import { api } from '@/features/common/store/api'
import { TodoDetailsResponseModel } from '@/features/todos/restResponseModels/todoDetailsResponseModel'
import { TodosListResponseModel } from '@/features/todos/restResponseModels/todosListResponseModel'

type TodoUpdateInputWithId = Prisma.TodoUpdateInput & {
    id: number
}

export const todosApi = api.injectEndpoints({
    endpoints: (builder) => ({
        getTodos: builder.query<TodosListResponseModel, void>({
            query: () => 'todos',
        }),
        getTodo: builder.query<TodoDetailsResponseModel, number>({
            query: (id) => `todos/${id}`,
        }),
        createTodo: builder.mutation<TodoDetailsResponseModel, Prisma.TodoCreateInput>({
            query: (body) => ({
                url: 'todos',
                method: 'POST',
                body
            })
        }),
        updateTodo: builder.mutation<TodoDetailsResponseModel, TodoUpdateInputWithId>({
            query: (data) => {
                const { id } = data;
                const dataToPut: Prisma.TodoUpdateInput = {
                    title: data.title,
                    dueDate: data.dueDate,
                    done: data.done
                }
                return {
                    url: `todos/${id}`,
                    method: 'PUT',
                    body: dataToPut
                }
            }
        }),
        deleteTodo: builder.mutation<undefined, number>({
            query: (id) => {
                return {
                    url: `todos/${id}`,
                    method: 'DELETE'
                }
            }
        })
    }),
})

export const {
    useGetTodosQuery,
    useGetTodoQuery,
    useCreateTodoMutation,
    useUpdateTodoMutation,
    useDeleteTodoMutation,
} = todosApi

export const {
    endpoints: { 
        getTodos,
        getTodo,
        createTodo,
        updateTodo,
        deleteTodo
    }
} = todosApi
As you can see, we are importing the api from the common feature's store that we created earlier and injecting endpoints into it. We have defined two queries and three mutations: the queries get the list of todos and the details of one todo, and the mutations create, update, and delete a todo. As you can tell from the URLs, they match the routes that we created within the "api" folder of the "app" folder. The types also match the request-body and response-body types that the API expects and returns.

Something interesting here is that I had to add an id property to the update request. The reason is that the query callback only accepts a single argument type, and that type must contain the id so we can build the URL that we want to PUT to. I was then able to strip that id back out before performing the PUT by manually specifying the fields of the request body (see the sketch below).
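For illustration only (these values are made up; the real call appears later on the edit page), the update mutation is triggered with the id included, and the query callback above strips it back out before issuing the PUT:

updateTodo({
    id: 5,                  // used only to build the URL /api/todos/5
    title: "Buy milk",
    dueDate: "2023-06-02",
    done: false,
});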

For more information about creating queries and mutations see these links:
https://redux-toolkit.js.org/rtk-query/usage/queries
https://redux-toolkit.js.org/rtk-query/usage/mutations

Update the store to use the API Reducer

Return to "features/common/store/index.ts" and give it the following code:

import { configureStore, ConfigureStoreOptions } from '@reduxjs/toolkit'
import { TypedUseSelectorHook, useDispatch, useSelector } from 'react-redux'
import { api } from './api';

export const createStore = (
    options?: ConfigureStoreOptions['preloadedState'] | undefined
) =>
    configureStore({
        reducer: {
            [api.reducerPath]: api.reducer
        },
        middleware: (getDefaultMiddleware) =>
            getDefaultMiddleware().concat(api.middleware),
        ...options,
    })

export const store = createStore()

export type AppDispatch = typeof store.dispatch
export const useAppDispatch: () => AppDispatch = useDispatch
export type RootState = ReturnType<typeof store.getState>
export const useTypedSelector: TypedUseSelectorHook<RootState> = useSelector
With all of this in place we are now ready to use our API with our UI components.

Update the Global Layout to use the Global RTK Store

In the “app” folder there is a “layout.tsx” file. Update it to use this code:

import './globals.css'
import { Inter } from 'next/font/google'
import { Providers } from '@/features/common/store/provider'

const inter = Inter({ subsets: ['latin'] })

export const metadata = {
  title: 'Create Next App',
  description: 'Generated by create next app',
}

export default function RootLayout({
  children,
}: {
  children: React.ReactNode
}) {
  return (
    <html lang="en">
      <body className={inter.className}>
        <Providers>{children}</Providers>
      </body>
    </html>
  )
}
Having this provider is what allows us to use the store hooks throughout the application. And in our case, since we are going to be using the queries and mutations that we defined earlier, this provider is what gives us access to that code.

Remove boilerplate styles

While we are at it, let's remove the customized CSS from the boilerplate. Go into globals.css and reduce it to only this code:

@tailwind base;
@tailwind components;
@tailwind utilities;
Furthermore, since we will be defining our components in a "features" folder, update the tailwind.config.js file to have this code:
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    './pages/**/*.{js,ts,jsx,tsx,mdx}',
    './components/**/*.{js,ts,jsx,tsx,mdx}',
    './app/**/*.{js,ts,jsx,tsx,mdx}',
    './features/**/*.{js,ts,jsx,tsx,mdx}'
  ],
  plugins: [],
}
There was some boilerplate code regarding the theme that was removed from that file as well.

Create the TODOS list page

At this point we are ready to create the TODOS list page. Let's begin by preparing its components: a create button, the list itself, and the items on the list.

Create Button Component

To the “features/todos” folder create a new folder called: “components”. To the components folder create a new folder called: “listOfTodosPageComponents”. To the “listOfTodosPageComponents” folder create a new file called: “createNewTodoButton.tsx”

Give it the following code:

import React from 'react'
import Link from 'next/link'

function CreateNewTodoButton() {
    return (
        <Link href="/todos/create" className='block p-2 m-2 text-white bg-green-500 rounded'>Create New Todo</Link>
    )
}

export default CreateNewTodoButton
For more information about the Link component from Next.js, see this page: https://nextjs.org/docs/app/building-your-application/routing/linking-and-navigating

TODO List Item Component

To the “features/todos/components/listOfTodosPageComponents” folder create a new file called: “todoListItem.tsx”. Give it the following code:

import React from 'react'
import Link from 'next/link'
import { TodoPayload } from '@/features/todos/prismaPayloads/todoPayload'

interface TodoListItemProps {
    todo: TodoPayload,
    deleteHandler: (id: number) => void
}

function TodoListItem({todo, deleteHandler}: TodoListItemProps) {
    return (
        <div className='flex-1 m-1 p-2 bg-teal-400'>
            <div className='grid p-4 gap-2 grid-rows-2 bg-white rounded'>
                <div>
                    <div className='text-xl font-bold'>{todo.title}</div>
                    <div>Due: {todo.dueDate.toString().split("T")[0]}</div>
                    <div>Done: {todo.done.valueOf().toString()}</div>
                </div>
                <div>
                    <div className='grid grid-cols-2 gap-2 bg-white'>
                        <Link href={`/todos/edit/${todo.id}`} className='block p-2 m-2 text-white bg-blue-500 rounded'>Edit</Link>
                        <button className='p-2 m-2 text-white bg-red-500 rounded' onClick={() => deleteHandler(todo.id)}>Delete</button>
                    </div>
                </div>
            </div>
        </div>
    )
}

export default TodoListItem
Take note that we were able to reuse the payload from the server-side code since it is the same model. This is a real convenience: we don't have to define a separate model for the client side.

TODOS List Component

In the "features/todos/components/listOfTodosPageComponents" folder create a new file called: "listOfTodos.tsx". Give it the following code:

import React from 'react'
import TodoListItem from './todoListItem'
import { TodoPayload } from '@/features/todos/prismaPayloads/todoPayload'

interface ListOfTodosProps {
    todos: TodoPayload[],
    deleteHandler: (id: number) => void
}

function ListOfTodos(props: ListOfTodosProps) {
    return (
        <div className='flex flex-col p-6 bg-gray-400'>
            {props.todos.map((todo) => (
                <TodoListItem key={todo.id} todo={todo} deleteHandler={props.deleteHandler} />
            ))}
        </div>
    )
}

export default ListOfTodos
Take note that once again we were able to use the payload used earlier.

TODOS List Page Component

To the "features/todos" folder create a new folder called: "pages". In the "pages" folder create a new file called: "listOfTodosPage.tsx" and give it the following code:

import React from 'react'
import { useGetTodosQuery, useDeleteTodoMutation } from '@/features/todos/store/todos'
import CreateNewTodoButton from '@/features/todos/components/listOfTodosPageComponents/createNewTodoButton'
import ListOfTodosComponent from '@/features/todos/components/listOfTodosPageComponents/listOfTodos'

function ListOfTodosPage() {
    const { isFetching, isError, error, isSuccess, data, refetch } = useGetTodosQuery(undefined, {
        refetchOnMountOrArgChange: true
    });

    const [deleteTodo, {isSuccess: deleteSuccess}] = useDeleteTodoMutation();

    const deleteHandler = (id: number) => {
        const confirmed = confirm("Are you sure you want to delete this todo?");
        if (confirmed) {
            deleteTodo(id);
        }        
    }

    React.useEffect(() => {
        if (deleteSuccess) {
            refetch();
        }
    }, [deleteSuccess, refetch])

    return (
        <div>
            <h1 className="text-3xl">List of Todos</h1>
            {isFetching && <div>Loading...</div>}
            {isError && <div>{error.toString()}</div>}
            {isSuccess && data && data.data && (
                <>
                    <CreateNewTodoButton />
                    <ListOfTodosComponent todos={data.data} deleteHandler={deleteHandler} />
                </>
            )}
        </div>
    )
}

export default ListOfTodosPage
This is where the power of RTK's createApi starts to show. Notice that from the todos store we imported useGetTodosQuery and useDeleteTodoMutation. The properties and methods returned by useGetTodosQuery are pretty powerful: in some codebases people manually track isFetching, isError, isSuccess, and similar flags, but createApi manages those properties for us. The fetch begins as soon as useGetTodosQuery is first called. We also have a mutation, useDeleteTodoMutation, which only executes when we ask it to. You will also notice that we set the refetchOnMountOrArgChange option to true in useGetTodosQuery. This is important; otherwise the page would show stale cached data for the entire time we are in the application. Leaving it off might be useful for enums or other data that rarely changes, or you can specify a polling interval to refresh continually instead (see the sketch below). For more information on those features see this page: https://redux-toolkit.js.org/rtk-query/usage/queries.
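As a minimal sketch of the polling alternative mentioned above (the 30-second interval is an arbitrary example value, not something this project uses), the hook call inside the component would look like this:

const { data } = useGetTodosQuery(undefined, {
    pollingInterval: 30000, // refetch the list every 30 seconds instead of only on mount
});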

Add the TODOS List Page Component to the Route

To the "app" folder create a new folder called: "todos". To the "todos" folder create a new file called: "page.tsx". Give it the following code:

"use client";
import ListOfTodosPage from '@/features/todos/pages/listOfTodosPage'

export default ListOfTodosPage
This is what will officially make the /todos/ page render in our app.

Update the “app/page.tsx” file with the following code:

import Link from "next/link"

export default function Home() {
  return (
    <main className="p-2">
      <h1 className="text-3xl">Welcome</h1>
      <p>
        <Link className="underline text-blue-600 hover:text-blue-800 visited:text-purple-600" href="/todos">Manage Todos</Link>
      </p>
    </main>
  )
}

Test the TODOS List Page

At this point you should be able to start up your application (either npm run dev, or npm run build followed by npm run start) and then browse to the TODOS list page. Depending on what data you created during the POSTMAN testing earlier, you should see a screen listing your existing todos.

None of the buttons work yet, but at least you should see the list. You can even add and delete more through POSTMAN as described earlier and refresh this screen to see the results.

Create the Create TODO Page

Let’s begin by creating some components that will be common for the create and edit forms.

Create the ID Form Control

To the “features/todos/components” folder create a new folder called: “createOrEditFormControls”. Create a new file called: “idFormControl.tsx”. Give it the following code:

import React from 'react'

interface IdFormControlProps {
    defaultValue: number
}

function IdFormControl({ defaultValue }: IdFormControlProps) {
    return (
        <input type="hidden" name="id" value={defaultValue} />
    )
}

export default IdFormControl

Create the Title Form Control

To the “features/todos/components/createOrEditFormControls” folder create a new file called: “titleFormControl.tsx”. Give it the following code:

import React from 'react'

interface TitleFormControlProps {
    defaultValue: string
}

function TitleFormControl({ defaultValue }: TitleFormControlProps) {

    const [title, setTitle] = React.useState(defaultValue);

    return (
        <>
            <label
                className="p-2 m-2 text-white bg-green-500 rounded"
                htmlFor="title">
                Title
            </label>
            <input
                className="p-2 m-2 text-black border border-gray-500 rounded"
                type="text"
                name="title"
                id="title"
                value={title}
                onChange={(e) => setTitle(e.currentTarget.value)}
            />
        </>
    )
}

export default TitleFormControl

Create the Due Date Form Control

To the “features/todos/components/createOrEditFormControls” folder create a new file called: “dueDateFormControl.tsx”. Give it the following code:

import React from 'react'

interface DueDateFormControlProps {
    defaultValue: string
}

function DueDateFormControl({ defaultValue }: DueDateFormControlProps) {

    const [dueDate, setDueDate] = React.useState(defaultValue);

    return (
        <>
            <label
                className="p-2 m-2 text-white bg-green-500 rounded"
                htmlFor="dueDate">
                Due Date
            </label>
            <input
                className="p-2 m-2 text-black border border-gray-500 rounded"
                type="date"
                name="dueDate"
                id="dueDate"
                value={dueDate}
                onChange={(e) => setDueDate(e.currentTarget.value)}
            />
        </>
    )
}

export default DueDateFormControl

Create the Done Form Control

To the “features/todos/components/createOrEditFormControls” folder create a new file called: “doneFormControl.tsx”. Give it the following code:

import React from 'react'

interface DoneFormControlProps {
    defaultValue: boolean
}

function DoneFormControl({ defaultValue }: DoneFormControlProps) {

    const [done, setDone] = React.useState(defaultValue);

    return (
        <>
            <label
                className="p-2 m-2 text-white bg-green-500 rounded"
                htmlFor="done">
                Done
            </label>
            <input
                className="p-2 m-2 text-black border border-gray-500 rounded"
                type="checkbox"
                name="done"
                id="done"
                checked={done}
                onChange={(e) => setDone(e.currentTarget.checked)}
            />
        </>
    )
}

export default DoneFormControl

Create the Submit Button Form Control

To the “features/todos/components/createOrEditFormControls” folder create a new file called: “submitButton.tsx”. Give it the following code:

import React from 'react'

function SubmitButton() {
  return (
    <input type="submit" value="Submit" className="p-2 m-2 text-white bg-green-500 rounded" />
  )
}

export default SubmitButton

Create the Create Form Component

To the "features/todos/components" folder create a new file called: "createTodoForm.tsx". Give it the following code:

import React from 'react'
import TitleFormControl from './createOrEditFormControls/titleFormControl'
import DueDateFormControl from './createOrEditFormControls/dueDateFormControl'
import DoneFormControl from './createOrEditFormControls/doneFormControl'
import SubmitButton from './createOrEditFormControls/submitButton'

interface CreateTodoFormProps {
    handleSubmit: (e: React.FormEvent<HTMLFormElement>) => void
}

function CreateTodoForm({ handleSubmit }: CreateTodoFormProps) {
    return (
        <form className="flex flex-col" onSubmit={handleSubmit}>
            <TitleFormControl defaultValue="" />
            <DueDateFormControl defaultValue="" />
            <DoneFormControl defaultValue={false} />
            <SubmitButton />
        </form>
    )
}

export default CreateTodoForm

Create the Create Form Page Component

To the “features/todos/pages” folder create a new file called: “createTodoPage.tsx”. Give it the following code:

"use client";
import React from 'react'
import CreateTodoForm from '@/features/todos/components/createTodoForm'
import { useCreateTodoMutation } from '@/features/todos/store/todos'
import { useRouter } from 'next/navigation';

function CreateTodoPage() {

    const router = useRouter();
    const [createTodo, { isError, isSuccess }] = useCreateTodoMutation();

    const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
        e.preventDefault();
        const formData = new FormData(e.currentTarget);
        const title = formData.get('title') as string;
        const dueDate = formData.get('dueDate') as string;
        const done = formData.get('done') as string;
        createTodo({
            title,
            dueDate,
            done: done === 'on'
        })
    }

    React.useEffect(() => {
        if (isError) {
            alert('Error creating todo');
        }
        if (isSuccess) {
            router.push("/todos");
        }
    }, [isError, isSuccess, router])

    return (
        <div>
            <h1 className="text-3xl">Create Todo</h1>
            <CreateTodoForm handleSubmit={handleSubmit} />
        </div>
    )
}

export default CreateTodoPage
This is where the rubber hits the road again, using the useCreateTodoMutation from the store we created earlier. We intercept the submit and call preventDefault (the default form submission would navigate to a different page, which we do not want in a SPA). Then we gather the values from the form and pass them to the createTodo method. If the mutation succeeds, we route back to the todos list page; if it fails, we show an alert.

Add the Create TODO Page to the Route

To the “app/todos” folder create a new folder called: “create”. Create a new file called: “page.tsx”. Give it the following code:

"use client";
import CreateTodoPage from "@/features/todos/pages/createTodoPage";

export default CreateTodoPage;

Create the Update TODO Page

With the components that we made earlier this should be a little simpler.

Create the Edit Form Component

To the “features/todos/components” folder create a new file called: “editTodoForm.tsx”. Give it the following code:

import React from 'react'
import IdFormControl from './createOrEditFormControls/idFormControl'
import TitleFormControl from './createOrEditFormControls/titleFormControl'
import DueDateFormControl from './createOrEditFormControls/dueDateFormControl'
import DoneFormControl from './createOrEditFormControls/doneFormControl'
import SubmitButton from './createOrEditFormControls/submitButton'

interface EditTodoFormProps {
    defaultValues: {
        id: number,
        title: string,
        dueDate: string,
        done: boolean
    },
    handleSubmit: (e: React.FormEvent<HTMLFormElement>) => void
}

function EditTodoForm({ defaultValues, handleSubmit }: EditTodoFormProps) {
    return (
        <form className="flex flex-col" onSubmit={handleSubmit}>
            <IdFormControl defaultValue={defaultValues.id} />
            <TitleFormControl defaultValue={defaultValues.title} />
            <DueDateFormControl defaultValue={defaultValues.dueDate} />
            <DoneFormControl defaultValue={defaultValues.done} />
            <SubmitButton />
        </form>
    )
}

export default EditTodoForm

Create the Edit Form Page Component

To the “features/todos/pages” folder create a new file called: “updateTodoPage.tsx”. Give it the following code:

import React from 'react'
import EditTodoForm from '../components/editTodoForm';
import { useUpdateTodoMutation, useGetTodoQuery } from '@/features/todos/store/todos'
import { useRouter } from 'next/navigation';
import { IDParams } from '@/features/common/params/idParams';
import { idParamaterValidator } from '@/features/common/paramValidators/idParamaterValidator';

function UpdateTodoPage({ params }: IDParams) {

    const validationResult = idParamaterValidator({ params });
    if (!validationResult.isValid) {
        throw new Error("Invalid id parameter");
    }

    const router = useRouter();
    const [updateTodo, { isError, isSuccess }] = useUpdateTodoMutation();
    const { data, isFetching } = useGetTodoQuery(Number(params.id), {
        refetchOnMountOrArgChange: true
    });

    const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
        e.preventDefault();
        const formData = new FormData(e.currentTarget);
        const id = formData.get('id') as string;
        const title = formData.get('title') as string;
        const dueDate = formData.get('dueDate') as string;
        const done = formData.get('done') as string;
        updateTodo({
            id: Number(id),
            title,
            dueDate,
            done: done === 'on'
        })
    }

    React.useEffect(() => {
        if (isError) {
            alert('Error updating todo');
        }
        if (isSuccess) {
            router.push("/todos");
        }
    }, [isError, isSuccess, router])

    return (
        <div>
            <h1 className="text-3xl">Update Todo</h1>
            {
                isFetching ? <p>Loading...</p> : (
                    data && data.data &&
                    <EditTodoForm
                        defaultValues={{
                            id: Number(params.id),
                            title: data.data.title,
                            dueDate: new Date(data.data.dueDate.toString()).toISOString().split('T')[0],
                            done: data.data.done
                        }}
                        handleSubmit={handleSubmit} />
                )
            }
        </div>
    )
}

export default UpdateTodoPage
What was nice here is that I was able to reuse the IDParams interface from the server side. I was also able to reuse the idParamaterValidator from the server side in this client-side component. Throwing that error displays it within the error boundary in NextJS; see this page for more details: https://nextjs.org/docs/app/building-your-application/routing/error-handling.
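As a minimal sketch of such an error boundary (this file is not part of the tutorial's repo as written; the file name and markup follow the NextJS convention and are an assumption here), an app/error.tsx like the following would catch the thrown error:

"use client";

// Hypothetical app/error.tsx; NextJS renders this when a component in the segment throws.
export default function Error({ error, reset }: { error: Error; reset: () => void }) {
    return (
        <div>
            <p>Something went wrong: {error.message}</p>
            <button onClick={() => reset()}>Try again</button>
        </div>
    );
}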

Add the Update TODO Page to the Route

To the "app/todos" folder create a new folder called: "edit". To the "edit" folder create a new folder called: "[id]". In that folder create a new file called: "page.tsx". Give it the following code:

"use client";
import UpdateTodoPage from '@/features/todos/pages/updateTodoPage';

export default UpdateTodoPage

Test the entire Application

At this point you should be able to test out the entire application. All create, read, update, delete and list functionality.

Conclusion

NextJS is certainly a powerful platform for creating full-stack web applications. While it provides functionality for server-side rendered pages, it also supports static sites and single-page applications. This demonstration was mostly a single-page application: all routing and rendering were performed client side, while the data was fetched from the database via REST calls. By combining Prisma and RTK Query, you are able to use the same models for the REST responses and do not need to manually re-key them. If you follow this practice, then when the database schema changes, the types that both the client and server use will change as well.

GitHub Repo: https://github.com/woodman231/nextjs-prisma-todos-rest

Parts:

Part 1 – Create a Development Container, Create the Next App and Install Required Dependencies
Part 2 – Configure the REST Server Features
Part 3 – Configure the REST Client Features

About Black Slate

Black Slate is a Software Development Consulting Firm that provides single and multiple turnkey software development teams, available on your schedule and configured to achieve success as defined by your requirements independently or in co-development with your team. Black Slate teams combine proven full-stack, DevOps, Agile-experienced lead consultants with Delivery Management, User Experience, Software Development, and QA experts in Business Process Automation (BPA), Microservices, Client- and Server-Side Web Frameworks of multiple technologies, Custom Portal and Dashboard development, Cloud Integration and Migration (Azure and AWS), and so much more. Each Black Slate employee leads with the soft skills necessary to explain complex concepts to stakeholders and team members alike and makes your business more efficient, your data more valuable, and your team better. In addition, Black Slate is a trusted partner of more than 4000 satisfied customers and has a 99.70% “would recommend” rating.

The post Part 3 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) appeared first on Black Slate.

]]>
255207
PART 2 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) http://www.blackslatesoftware.com/part-2-full-stack-todos-application-with-nextjs-prisma-using-sql-server-and-redux-toolkit-rtk/ Fri, 02 Jun 2023 20:02:29 +0000 http://www.blackslatesoftware.com/?p=255179 The post PART 2 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) appeared first on Black Slate.

]]>

Part 2 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK)

The purpose of this document series is to describe the steps that are necessary to create a “Todos” application using NextJS, Prisma, and Redux Toolkit (RTK).

NextJS is an exciting development tool for creating web applications because it allows your server code and client code to live in the same repository. It is essentially like having a NodeJS server application and a React application in the same repository.

TLDR, show me the code: https://github.com/woodman231/nextjs-prisma-todos-rest

TOPIC: Configure the REST Server Features

I will be organizing the code so that each feature of the application lives in its own feature directory and provides models and code to other parts of the application.

Configure the “common” features

To the “next-app” directory create a new directory called: “features”. To the “features” directory create a new directory called “common”.

Configure the re-usable PrismaClient

The first thing that we will need in order to interact with the database is an instance of the PrismaClient. We also do not want to create more than one instance of the client, since it does its own connection pooling; this prevents us from having to manually connect to and disconnect from the database.

Create a new directory called: “prisma” inside of this “common” directory. Inside the “prisma” directory create a new file called: “prismaClient.ts”. Give it the following code:

import { PrismaClient } from "@prisma/client";

let applicationPrismaClient: PrismaClient | null = null;

function getPrismaClient() {
    if (!applicationPrismaClient) {
        applicationPrismaClient = new PrismaClient();
    }

    return applicationPrismaClient;
}

export const prismaClient = getPrismaClient();
Essentially we start by setting the variable that will store the PrismaClient instance to null. The first time getPrismaClient is called it creates the instance; later calls return the already-initialized variable.
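As a quick illustration (not part of the project files, and the helper name is made up), any server-side code can now import the shared client and query with it without opening extra connections:

import { prismaClient } from "@/features/common/prisma/prismaClient";

// Hypothetical helper: count the todos that are not done yet
export async function countOpenTodos(): Promise<number> {
    return prismaClient.todo.count({ where: { done: false } });
}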

Configure the common parameter validators

We will be validating requests like /api/todos/1 often. But what if someone tries to go to /api/todos/one? As we know, "one" is not a number, and therefore not valid to pass along in a where clause to SQL Server. So, let's set up a common way to handle that.

Create a “params” directory within the “common” directory. Create an “idParams.ts” file and give it the following code:

export interface IDParams {
    params: {
        id: string;
    }
}
Create a new directory under “common” called: “paramValidators”. To the “paramValidators” directory create a new file called: “idParamaterValidator.ts”. Give it the following code:
import { IDParams } from "../params/idParams";

export const idParamaterValidator = ({ params }: IDParams): { isValid: boolean, errorMessage?: string } => {
    const id = Number(params.id);
    if (Number.isNaN(id)) {
        return { isValid: false, errorMessage: `${params.id} is not a number` };
    }

    return { isValid: true };
}
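As a quick illustration (not part of the project files), here is how the validator behaves:

// Illustrative only
idParamaterValidator({ params: { id: "1" } });   // { isValid: true }
idParamaterValidator({ params: { id: "one" } }); // { isValid: false, errorMessage: "one is not a number" }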

Configure the common REST Response Models, and REST Responses

In this application I want to have every REST Response return with either a data key, or an error key. The data will be of a type that will be defined later, and the error will just have a message key inside of it.

Create a new directory inside of common called: "restResponseModels". Create a new file called: "restApplicationErrorResponseModel.ts". Give it the following code:

export interface RestApplicationErrorResponseModel {    
    message: string;    
}
Create a new file in the "restResponseModels" directory called: "restApplicationResponseModel.ts". Give it the following code:
import { RestApplicationErrorResponseModel } from "./restApplicationErrorResponseModel";

export interface RestApplicationResponseModel<T> {
    data?: T;
    error?: RestApplicationErrorResponseModel;
}
Depending on your experience level with TypeScript, that "T" might look new to you. Essentially we are taking a type as a parameter and using it as the type of the data key. When we use this model, we will be doing things like the following (this is just example code and does not belong anywhere in the solution):
interface SomeDataType {
    someData: string;    
}

let results: RestApplicationResponseModel<SomeDataType> = {};
let errorOccured = false;

if (errorOccured) {
    results.error = {
        message: "some error"
    }
} else {
    results.data = {
        someData: "some data"
    }
}
Errors do occur and can be part of our responses. For NextJS we just need to return a NextResponse with some optional data and a status code. Since we already established that we will respond with an object that can have an error key containing a message, let's put together an error response builder function for the various errors we expect to return from our application.

Create a new directory within the common directory called: “restResponses”. Create a new file called: “restErrorResponseBuilder.ts”. Give it the following code:

import { StatusCode } from "status-code-enum"
import { NextResponse } from "next/server"
import { RestApplicationErrorResponseModel } from "../restResponseModels/restApplicationErrorResponseModel"
import { RestApplicationResponseModel } from "../restResponseModels/restApplicationResponseModel"

export function restErrorResponseBuilder(initialMessage: string, statusCode: StatusCode): (additionalDetails?: string) => NextResponse {
    return function <T>(additionalDetails?: string): NextResponse {
        let finalMessage = initialMessage;

        if (additionalDetails) {
            finalMessage += `: ${additionalDetails}`
        }

        const errorMessage: RestApplicationErrorResponseModel = {
            message: finalMessage,
        }

        const restResponse: RestApplicationResponseModel<T> = {
            error: errorMessage
        }

        return NextResponse.json(restResponse, { status: statusCode })
    }
}
Essentially this is a function that returns a function, and we are using type parameters again. The builder supplies the initial error message and status code, while the inner function lets the developer optionally pass some additional details. If additional details are supplied, the final error message is a concatenation of the initialMessage and the additionalDetails; otherwise the error message is just the initial message. The NextResponse is returned with the appropriate status code.

Now that we have a common way to build errors. Let’s build a couple of them that we will be using.

In this “restResponses” directory create a new file called: “badRequestErrorResponse.ts” and give it the following code:

import { StatusCode } from "status-code-enum"
import { restErrorResponseBuilder } from "./restErrorResponseBuilder";

export const badRequestErrorResponse = restErrorResponseBuilder("Bad Request", StatusCode.ClientErrorBadRequest)
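As a quick illustration (not part of the project files) of what this helper produces:

badRequestErrorResponse();
// -> a 400 response with the body { error: { message: "Bad Request" } }

badRequestErrorResponse("id must be a number");
// -> a 400 response with the body { error: { message: "Bad Request: id must be a number" } }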
As mentioned earlier we will be using Zod for schema validation. In our application that mostly means we will be validating the input from the user via REST Request Bodies. Let’s build a common way to convert the issues from Zod to Bad Request REST Responses.

Create a new file called: “badRequestErrorResponseFromZodIssues.ts” and give it the following code:

import {ZodIssue} from "zod";
import {badRequestErrorResponse} from "./badRequestErrorResponse";

export const badRequestErrorResponseFromZodIssues = (issues: ZodIssue[] | undefined) => {
    if(issues) {
        const additionalDetails: string[] = [];
        issues.forEach(issue => {
            if (issue.code === "invalid_union") {
                const { unionErrors } = issue;
                if (unionErrors) {
                    unionErrors.forEach(unionError => {
                        unionError.issues.forEach(issue => {
                            const errorMessageString = `${issue.message} for ${issue.path.join(".")}`;
                            if (!additionalDetails.includes(errorMessageString)) {
                                additionalDetails.push(`${issue.message} for ${issue.path.join(".")}`);
                            }
                        });
                    });
                }
            } else {
                const errorMessageString = `${issue.message} for ${issue.path.join(".")}`;
                if (!additionalDetails.includes(errorMessageString)) {
                    additionalDetails.push(`${issue.message} for ${issue.path.join(".")}`);
                }
            }
        });
    
        return badRequestErrorResponse(additionalDetails.join(". "));
    } else {
        return badRequestErrorResponse();
    }
}
There is a lot going on here, but essentially we are using the badRequestErrorResponse that we created earlier and looping over all of the Zod issues to build one cohesive "additionalDetails" string for it.
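As a minimal sketch of how this helper would typically be called (the todoCreateSchema here is just an illustrative Zod schema, not the one the project defines):

import { z } from "zod";
import { badRequestErrorResponseFromZodIssues } from "./badRequestErrorResponseFromZodIssues";

// Illustrative schema only
const todoCreateSchema = z.object({ title: z.string(), dueDate: z.string(), done: z.boolean() });

export async function validateExampleBody(requestBody: unknown) {
    const parseResult = todoCreateSchema.safeParse(requestBody);
    if (!parseResult.success) {
        // Hand Zod's issues to the helper above to build a single 400 response
        return badRequestErrorResponseFromZodIssues(parseResult.error.issues);
    }
    // ...continue handling the valid request body
}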

If the user requests /api/todos/1, and 1 has been deleted, then we will want to respond with a not found error.

Create a new file called: “notFoundErrorResponse.ts”. Give it the following code:

import { StatusCode } from "status-code-enum"
import { restErrorResponseBuilder } from "./restErrorResponseBuilder";

export const notFoundErrorResponse = restErrorResponseBuilder("Not Found", StatusCode.ClientErrorNotFound)
And lastly when we don’t know what went wrong, let’s do our internal server error response.

Create a new file called: “internalServerErrorResponse.ts”. Give it the following code:

import { StatusCode } from "status-code-enum"
import { restErrorResponseBuilder } from "./restErrorResponseBuilder";

export const internalServerErrorResponse = restErrorResponseBuilder("Internal Server Error", StatusCode.ServerErrorInternal);
OK, now that we have covered the most common error responses, let's focus on the happy path.

When an object is created in the database we want to respond with a created response.

Create a new file called: “createdResponse.ts”. Give it the following code:

import { StatusCode } from "status-code-enum"
import { NextResponse } from "next/server"
import { RestApplicationResponseModel } from "../restResponseModels/restApplicationResponseModel"

export function createdResponse<T>(data: T): NextResponse {
    const restResponse: RestApplicationResponseModel<T> = {
        data: data
    }

    return NextResponse.json(restResponse, { status: StatusCode.SuccessCreated })
}
When an object is deleted from the database we want to respond with a no content response.

Create a new file called: “noContentResponse.ts”. Give it the following code:

import { StatusCode } from "status-code-enum"
import { NextResponse } from "next/server"

export function noContentResponse(): NextResponse {
    const nextResponse = new NextResponse(null, { status: StatusCode.SuccessNoContent });    

    return nextResponse;
}
Notice how this one is a little different: we pass null as the first parameter to the NextResponse constructor along with the No Content status code, instead of using the json method on the NextResponse class.

Finally, when the user requests a list of objects, requests the details of one object, or an update request succeeds, we want to return the response we hope to give most often: the OK response.

Create a new file called: “okResponse.ts”. Give it the following code:

import { StatusCode } from "status-code-enum"
import { NextResponse } from "next/server"
import { RestApplicationResponseModel } from "../restResponseModels/restApplicationResponseModel"

export function okResponse<T>(data: T): NextResponse {
    const restResponse: RestApplicationResponseModel<T> = {
        data
    }

    return NextResponse.json(restResponse, { status: StatusCode.SuccessOK })
}

Configure the common request handler

Now that we have all of our responses ready, let's use them in a common way. Most REST requests handled at the application layer will first attempt some validation and, if it fails, respond with a bad request response. If the request is valid, the requested operation is attempted; if something goes wrong during that part of the handling, the handler responds with an internal server error. If everything goes right, it responds with the appropriate data and status code.

Create a new directory within the "common" directory called: "restRequestHandlers". In it create a new file called: "restRequestHandlerBuilder.ts" and give it the following code:

import { NextRequest, NextResponse } from "next/server";
import { badRequestErrorResponse } from "../restResponses/badRequestErrorResponse";
import { badRequestErrorResponseFromZodIssues } from "../restResponses/badRequestErrorResponseFromZodIssues";
import { internalServerErrorResponse } from "../restResponses/internalServerErrorResponse";
import { ZodIssue } from "zod";

interface RestRequestValidationResult<RequestBodyType> {
    success: boolean;
    validatedRequestBody?: RequestBodyType;
    issues?: ZodIssue[];
}

interface ValidatedRequestDetailsParams<ParamsType, RequestBodyType> {
    validatedRequestBody?: RequestBodyType;
    params?: ParamsType;
}

export interface RestRequestHandlerBuilderOptions<ParamsType, RequestBodyType> {
    onValidateParams?: (params: ParamsType) => { isValid: boolean, errorMessage?: string };
    onValidateRequestAsync?: (req: NextRequest) => Promise<RestRequestValidationResult<RequestBodyType>>;
    onValidRequestAsync: (req: NextRequest, details?: ValidatedRequestDetailsParams<ParamsType, RequestBodyType>) => Promise<NextResponse>;    
}

export function restRequestHandlerBuilder<ParamsType, RequestBodyType>(options: RestRequestHandlerBuilderOptions<ParamsType, RequestBodyType>) {
    return async (req: NextRequest, params:ParamsType): Promise<NextResponse> => {
        try {
            let isValidRequest: boolean = false;
            let details: { validatedRequestBody?: RequestBodyType, params?: ParamsType} = {};

            if (options.onValidateParams) {
                const { isValid, errorMessage } = options.onValidateParams(params);
                if (!isValid) {
                    if (errorMessage) {
                        return badRequestErrorResponse(errorMessage);
                    }

                    return badRequestErrorResponse("invalid params");
                }

                details.params = params;
            }            

            if(options.onValidateRequestAsync) {
                const validation = await options.onValidateRequestAsync(req);
                if (!validation.success) {
                    const { issues } = validation;
    
                    return badRequestErrorResponseFromZodIssues(issues);
                } else {
                    details.validatedRequestBody = validation.validatedRequestBody;
                    isValidRequest = true;
                }                
            } else {
                isValidRequest = true;
            }

            if(isValidRequest) {
                const response = await options.onValidRequestAsync(req, details);                
                return response;
            } else {
                return badRequestErrorResponse();
            }

        } catch (error) {            
            if(error instanceof Error) {                
                return internalServerErrorResponse(error.message);
            }

            return internalServerErrorResponse();
        }
    }
}
There is a lot going on here. First of all, remember that "T" from earlier? A type parameter doesn't have to be the letter "T"; you can call it whatever you want, and you can even specify multiple type parameters by separating them with commas inside the angle brackets.

This is probably best explained by starting from the third interface, RestRequestHandlerBuilderOptions, which takes two type parameters: ParamsType and RequestBodyType. That interface lets the developer define up to three functions: onValidateParams, onValidateRequestAsync, and onValidRequestAsync. onValidateParams and onValidateRequestAsync are optional, as they will likely not be used when requesting lists of data. The onValidRequestAsync method is required, as it is what issues the response. Each function's return value is described by the other two interfaces in the file, and the type parameters used in RestRequestHandlerBuilderOptions are passed along to those interfaces. This ensures type safety: a developer using this builder is forced to return the correct type of data, or a compiler error occurs.

The restRequestHandlerBuilder function has two type parameters, one for ParamsType and another for RequestBodyType. Bear in mind that even "unknown" or "any" are technically valid types a developer could supply. Let's step through the function that this builder returns: it returns a Promise of a NextResponse, which is what the NextJS App Router needs in order to send its response.

It wraps everything in a try…catch and validates the request with the onValidateParams and onValidateRequestAsync methods provided in the builder options. If the request is not valid, the function responds with the appropriate badRequestErrorResponse that we created earlier. If the request is valid, it gets the response from the onValidRequestAsync function, which receives the validated details. If anything goes wrong along the way, the error is caught and an internalServerErrorResponse is returned.

Configure the “todo” features

The features that we will be configuring are: selectors for the Prisma client (i.e. the SELECT portion of the SQL statement), the return types of those selectors, REST responses based on those return types, and request handlers that use those REST responses.

Configure the Prisma selectors

To the “features” directory create a new directory called: “todos”. To the “todos” directory create a new directory called: “prismaSelectors”. To the “prismaSelectors” directory create a new file called: “todoSelector.ts”. Give it the following code:

import { Prisma } from '@prisma/client';

export const todoSelector = {
    id: true,
    title: true,
    dueDate: true,
    done: true,
} satisfies Prisma.TodoSelect;

This file defines which fields from the Todo table we will be selecting with Prisma. The TodoSelect type was generated internally by Prisma from our schema file.

Configure the Prisma Selector return type, i.e. the data types

Create a new directory called: “prismaPayloads”. To the “prismaPayloads” directory create a new file called: “todoPayload.ts”. Give it the following code:

import { Prisma } from '@prisma/client';
import { todoSelector } from "../prismaSelectors/todoSelector";

export type TodoPayload = Prisma.TodoGetPayload<{ select: typeof todoSelector }>;
This will be the data type "T" for our okResponse. Basically, this creates a type for us that has the properties we are requesting in our selector. You could imagine that it is pretty much the same as the following code (this is example code and does not belong anywhere in the solution):
export type TodoPayload = {
    id: number;
    title: string;
    dueDate: Date;
    done: boolean;
};
To illustrate further, let's say the selector was as follows (removing the dueDate key; again, this code does not belong in the solution):
import { Prisma } from '@prisma/client';

export const todoSelector = {
    id: true,
    title: true,
    done: true,
} satisfies Prisma.TodoSelect;
Then "export type TodoPayload = Prisma.TodoGetPayload<{select: typeof todoSelector}>;" would produce a type defined with this code (with the dueDate key removed; again, this code does not belong in the solution):
export type TodoPayload = {
    id: number;
    title: string;    
    done: boolean;
};

Configuring the REST Response Models

Create a new directory called: “restResponseModels”. To the “restResponseModels” directory create a new file called: “todoDetailsResponseModel.ts”. Give it the following code:

import { TodoPayload } from "../prismaPayloads/todoPayload";
import { RestApplicationResponseModel } from "../../common/restResponseModels/restApplicationResponseModel";

export type TodoDetailsResponseModel = RestApplicationResponseModel<TodoPayload>;
In the “restResponseModels” directory create a new file called: “todosListResponseModel.ts”. Give it the following code:
import { TodoPayload } from "../prismaPayloads/todoPayload";
import { RestApplicationResponseModel } from "../../common/restResponseModels/restApplicationResponseModel";

export type TodosListResponseModel = RestApplicationResponseModel<TodoPayload[]>;
Essentially what we are doing in both files is grabbing the TodoPayload from this feature and the RestApplicationResponseModel from the common feature, and then exporting a type based on the RestApplicationResponseModel with a data type "T" of either the TodoPayload or an array of TodoPayloads. You could now imagine that the TodoDetailsResponseModel looks like this type (this code does not belong anywhere in the solution):
export type TodoDetailsResponseModel = {
    data?: {
        id: number;
        title: string;
        dueDate: Date;
        done: boolean;        
    },
    error?: {
        message: string;
    }
}

Configuring the REST Responses

Now that we have our response models, it’s time to generate our responses based on these models.

Within the “todos” directory create a new directory called: “restResponses”. To the “restResponses” directory create a new file called: “todoDetailsResponse.ts”. Give it the following code:

import { NextResponse } from "next/server";
import { okResponse } from "../../common/restResponses/okResponse";
import { TodoPayload } from "../prismaPayloads/todoPayload";

export const todoDetailsResponse = (todo: TodoPayload): NextResponse => {
    return okResponse(todo);
}
In the “restResponses” directory create a new file called: “listOfTodosResponse.ts”. Give it the following code:
import { NextResponse } from "next/server";
import { okResponse } from "../../common/restResponses/okResponse";
import { TodoPayload } from "../prismaPayloads/todoPayload";

export const todosListResponse = (todos: TodoPayload[]): NextResponse => {
    return okResponse(todos);
}

Configure the REST Request Handlers

Now it’s time to put these responses to use.

Create a new directory within the "todos" directory called: "restRequestHandlers". To the "restRequestHandlers" directory create a new file called: "getListOfTodosRequestHandler.ts". Give it the following code:

import { NextRequest } from "next/server";
import { prismaClient } from "@/features/common/prisma/prismaClient";
import { todosListResponse } from "../restResponses/listOfTodosResponse";
import { todoSelector } from "../prismaSelectors/todoSelector";
import { restRequestHandlerBuilder, RestRequestHandlerBuilderOptions } from "@/features/common/restRequestHandlers/restRequestHandlerBuilder";

const getListOfTodosRequestHandlerBuilderOptions: RestRequestHandlerBuilderOptions<undefined, undefined> = {
    onValidRequestAsync: async (req: NextRequest) => {
        const todos = await prismaClient.todo.findMany({ select: todoSelector });

        return todosListResponse(todos);
    }
}

export const getListOfTodosRequestHandler = restRequestHandlerBuilder(getListOfTodosRequestHandlerBuilderOptions);
Let's break this down a little bit. We have imported the NextRequest class from the "next/server" package. We also included our prismaClient, our todosListResponse, the todoSelector, the restRequestHandlerBuilder, and the RestRequestHandlerBuilderOptions from within our project.

We then created an object based on the RestRequestHandlerBuilderOptions, to which we provided two "undefined" types. Remember, these types represent the request parameters and the request body that we are expecting. For this list of todos, which will be served at "/api/todos", there are no parameters and no request body to validate, which is why we used the "undefined" type for both. The only method that we defined is onValidRequestAsync, in which we use our prismaClient to findMany todos, selecting the fields we defined earlier in the todoSelector. We then respond with the todosListResponse containing the todos we found in the database. Finally, we pass the builder options to restRequestHandlerBuilder and export the handler function it returns, which will later be wired up with the NextJS app router.

Now let’s create a handler to get one todo, aka /api/todos/1.

Create a new file within the "restRequestHandlers" directory called: "getTodoDetailsRequestHandler.ts". Give it the following code:

import { NextRequest } from "next/server";
import { prismaClient } from "@/features/common/prisma/prismaClient";
import { todoSelector } from "../prismaSelectors/todoSelector";
import { todoDetailsResponse } from "../restResponses/todoDetailsResponse";
import { notFoundErrorResponse } from "../../common/restResponses/notFoundErrorResponse";
import { restRequestHandlerBuilder, RestRequestHandlerBuilderOptions } from "@/features/common/restRequestHandlers/restRequestHandlerBuilder";
import { IDParams } from "@/features/common/params/idParams";
import { idParamaterValidator } from "@/features/common/paramValidators/idParamaterValidator";

const getTodoDetailsRequestHandlerBuilderOptions: RestRequestHandlerBuilderOptions<IDParams, undefined> = {    
    onValidateParams: idParamaterValidator,

    onValidRequestAsync: async (req: NextRequest, details) => {
        if (details && details.params) {
            const { params } = details.params;
            const id = Number(params.id);
            const todo = await prismaClient.todo.findUnique({ where: { id: id }, select: todoSelector });

            if (todo) {
                return todoDetailsResponse(todo);
            } else {
                return notFoundErrorResponse();
            }
        } else {
            throw new Error("Params were not defined");
        }
    },
};

export const getTodoDetailsRequestHandler = restRequestHandlerBuilder(getTodoDetailsRequestHandlerBuilderOptions);
The imports are mostly the same as last time, with a few additions: the IDParams type and the idParamaterValidator, both from the common feature. This time we provide IDParams as the builder's first type parameter, since that is the parameter type the builder expects. For the builder's onValidateParams method we simply use the idParamaterValidator, which means that if someone requests /api/todos/one they will receive an error stating that "one" is not a number.

The onValidRequestAsync method receives the request and the validated details. The params key of the details object is optional, so we still need to check for it in this function. The "throw new Error…" branch is effectively unreachable because of how the restRequestHandler deals with null or undefined details, but the compiler still requires us to handle it; the alternative would be to sprinkle optional chaining everywhere, writing "details?.params" or "details?.validatedRequestBody". I prefer a single truthy check over those question marks.
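As a minimal sketch of why the truthy check keeps things tidy, consider the following; the Details type and readId function are hypothetical and are not part of the solution:
// Illustrative only: one truthy check narrows the type for the whole block,
// instead of optional chaining on every property access.
type Details = { params?: { params: { id: string } } } | undefined;

function readId(details: Details): number {
    // With optional chaining we would have to write Number(details?.params?.params.id ?? NaN)
    // at every call site. The truthy check below narrows details once.
    if (details && details.params) {
        return Number(details.params.params.id);
    }

    throw new Error("Params were not defined");
}

console.log(readId({ params: { params: { id: "1" } } })); // 1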

In the "restRequestHandlers" directory, create a new file called: "createTodoRequestHandler.ts". Give it the following code:

import { Prisma } from "@prisma/client";
import { prismaClient } from "@/features/common/prisma/prismaClient";
import { NextRequest } from "next/server";
import { todoSelector } from "../prismaSelectors/todoSelector";
import { todoDetailsResponse } from "../restResponses/todoDetailsResponse";
import { TodoCreateInputObjectSchema } from "../../../prisma/generated/schemas/objects/TodoCreateInput.schema";
import { restRequestHandlerBuilder, RestRequestHandlerBuilderOptions } from "@/features/common/restRequestHandlers/restRequestHandlerBuilder";

const createTodoRequestHandlerBuilderOptions: RestRequestHandlerBuilderOptions<undefined, Prisma.TodoCreateInput> = {
    onValidateRequestAsync: async (req: NextRequest) => {
        const requestBody = await req.json();
        const validation = TodoCreateInputObjectSchema.safeParse(requestBody);

        if (!validation.success) {
            const { errors } = validation.error;
            return { success: false, issues: errors };
        } else {            
            return { success: true, validatedRequestBody: validation.data };
        }
    },

    onValidRequestAsync: async (req: NextRequest, details) => {                
        if(details && details.validatedRequestBody) {
            const createArgs: Prisma.TodoCreateArgs = {
                data: details.validatedRequestBody,
                select: todoSelector
            };
    
            const todo = await prismaClient.todo.create(createArgs);
    
            return todoDetailsResponse(todo);
        } else {
            throw new Error("Validated request body is undefined");
        }
    },
};

export const createTodoRequestHandler = restRequestHandlerBuilder(createTodoRequestHandlerBuilderOptions);
In this handler we finally use the Zod validation schemas that were generated for us earlier. Because we cannot control exactly what data will be provided in the request body, we certainly want to validate it before it reaches the prismaClient. In this builder's onValidateRequestAsync method we read the request body as JSON and pass the result to the TodoCreateInputObjectSchema from the schema objects that the Zod Prisma generator created for us. There is a lot going on there, but essentially it makes sure that the request body JSON matches the Prisma.TodoCreateInput type, and it also prevents over-posting of extra fields.

When you hover your mouse over TodoCreateInput you will see how the type is defined.

Therefore, if someone were to post this request body, it would be invalid because of the extra property.
{
    "title": "My Title",
    "dueDate": "2023-05-27",
    "done": false,
    "foo": "bar"
}
Furthermore, a payload like this would fail because of a missing property.
{
    "title": "My Title"    
}
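If you would like to see that behavior in isolation, here is a minimal hand-written sketch that mimics what the generated schema does. The schema below is illustrative only; the solution itself uses the generated TodoCreateInputObjectSchema:
import { z } from "zod";

// Illustrative only: a strict Zod schema rejects unknown keys (over-posting)
// and reports missing required keys.
const todoCreateSketchSchema = z.object({
    title: z.string(),
    dueDate: z.coerce.date(),
    done: z.boolean().optional(),
}).strict();

const withExtraKey = todoCreateSketchSchema.safeParse({
    title: "My Title",
    dueDate: "2023-05-27",
    done: false,
    foo: "bar",
});
console.log(withExtraKey.success); // false, because of the unrecognized key "foo"

const missingDueDate = todoCreateSketchSchema.safeParse({ title: "My Title" });
console.log(missingDueDate.success); // false, because dueDate is required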
We will use the same logic for the update request, except with the Prisma.TodoUpdateInput type instead of the Prisma.TodoCreateInput type.

Create a new file called: “updateTodoRequestHandler.ts”. Give it the following code:

import { Prisma } from "@prisma/client";
import { prismaClient } from "@/features/common/prisma/prismaClient";
import { NextRequest } from "next/server";
import { todoSelector } from "../prismaSelectors/todoSelector";
import { todoDetailsResponse } from "../restResponses/todoDetailsResponse";
import { IDParams } from "@/features/common/params/idParams";
import { idParamaterValidator } from "@/features/common/paramValidators/idParamaterValidator";
import { TodoUpdateInputObjectSchema } from "@/prisma/generated/schemas/objects/TodoUpdateInput.schema";
import { restRequestHandlerBuilder, RestRequestHandlerBuilderOptions } from "@/features/common/restRequestHandlers/restRequestHandlerBuilder";

const updateTodoRequestHandlerBuilderOptions: RestRequestHandlerBuilderOptions<IDParams, Prisma.TodoUpdateInput> = {
    onValidateParams: idParamaterValidator,

    onValidateRequestAsync: async (req: NextRequest) => {
        const requestBody = await req.json();
        const validation = TodoUpdateInputObjectSchema.safeParse(requestBody);

        if (!validation.success) {
            const { errors } = validation.error;
            return { success: false, issues: errors };
        } else {
            return { success: true, validatedRequestBody: validation.data };
        }
    },

    onValidRequestAsync: async (req: NextRequest, details) => {
        if (details && details.params && details.validatedRequestBody) {
            const id = Number(details.params.params.id);

            const updateArgs: Prisma.TodoUpdateArgs = {
                where: { id: id },
                data: details.validatedRequestBody,
                select: todoSelector
            };

            const todo = await prismaClient.todo.update(updateArgs);

            return todoDetailsResponse(todo);
        } else {
            throw new Error("Validated request body is undefined, or params are undefined.");
        }
    }
}

export const updateTodoRequestHandler = restRequestHandlerBuilder(updateTodoRequestHandlerBuilderOptions);
This is by far the most complex handler, since we define all three methods in our RestRequestHandlerBuilderOptions object as well as both type parameters. However, it is essentially the concepts from the getTodoDetailsRequestHandler and the createTodoRequestHandler combined into one file. Here again we bail out if someone requests /api/todos/one. Likewise, if the caller omits required properties or adds extra properties to the request body, we tell them that the request is not valid and why. If everything looks valid, we attempt to perform the operation on the database; if that operation fails, an internal server error is returned, otherwise we return the data we want our application to respond with.

Finally, we want to be able to delete tasks.

Create a new file called: “deleteTodoRequestHandler.ts”. Give it the following code:

import { NextRequest } from "next/server";
import { prismaClient } from "@/features/common/prisma/prismaClient";
import { noContentResponse } from "../../common/restResponses/noContentResponse";
import { restRequestHandlerBuilder, RestRequestHandlerBuilderOptions } from "@/features/common/restRequestHandlers/restRequestHandlerBuilder";
import { IDParams } from "@/features/common/params/idParams";
import { idParamaterValidator } from "@/features/common/paramValidators/idParamaterValidator";

const deleteTodoRequestHandlerBuilderOptions: RestRequestHandlerBuilderOptions<IDParams, undefined> = {
    onValidateParams: idParamaterValidator,

    onValidRequestAsync: async (req: NextRequest, details) => {
        if (details && details.params) {
            const { params } = details.params;
            const id = Number(params.id);
            await prismaClient.todo.delete({ where: { id: id } });

            return noContentResponse();
        } else {
            throw new Error("Params were not defined");
        }
    }
};

export const deleteTodoRequestHandler = restRequestHandlerBuilder(deleteTodoRequestHandlerBuilderOptions);

Configure the API Routes

Now that we have our todo request handlers defined, let's put them in a place where NextJS will actually read and use them.

In the "app" directory inside the "next-app" directory, create a new directory called: "api". In the "api" directory create a new directory called: "todos". In the "todos" directory create a new file called: "route.ts". Give it the following code:

import { getListOfTodosRequestHandler } from '@/features/todos/restRequestHandlers/getListOfTodosRequestHandler'
import { createTodoRequestHandler } from '@/features/todos/restRequestHandlers/createTodoRequestHandler'

export {
    getListOfTodosRequestHandler as GET,
    createTodoRequestHandler as POST
}
This means that getListOfTodosRequestHandler will respond to a GET request for /api/todos, and that createTodoRequestHandler will respond to a POST request for /api/todos.

In the "todos" directory create a new directory called: "[id]". In this "[id]" directory create a new file called: "route.ts". Give it the following code:

import { getTodoDetailsRequestHandler } from "@/features/todos/restRequestHandlers/getTodoDetailsRequestHandler"
import { updateTodoRequestHandler } from "@/features/todos/restRequestHandlers/updateTodoRequestHandler"
import { deleteTodoRequestHandler } from "@/features/todos/restRequestHandlers/deleteTodoRequestHandler"

export {
    getTodoDetailsRequestHandler as GET,
    updateTodoRequestHandler as PUT,
    deleteTodoRequestHandler as DELETE
}
This means that getTodoDetailsRequestHandler will respond to a GET request to /api/todos/[id], updateTodoRequestHandler will respond to a PUT request to /api/todos/[id], and deleteTodoRequestHandler will respond to a DELETE request to /api/todos/[id].

The directory layout should look something like this:

Test the API Routes

You are now free to test the API routes using Postman or any other REST tester that you like by running "npm run dev" within the next-app directory of the project. Here are descriptions of the tests that I performed (a small fetch script after the list shows a few of them as code):

Bad request when trying to send extra properties that are not a part of the Todo model.

Bad request with title as the wrong type of data.
Success with the done property defined.
Success without the done property defined.
Success when requesting a list of todos.
Bad request when trying to get /api/todos/notanumber, like /api/todos/one
Not found when trying to get /api/todos/idnotindatabase, like /api/todos/3
Success when providing a known id to /api/todos/[id]
Bad request when providing an extra key to the request body on update
Bad request when providing the wrong data type to title
Success with valid request body on PUT to valid id
Bad request when sending not a number to the /api/todos/[id] route for PUT
Bad request when sending not a number to the /api/todos/[id] for DELETE
Success when providing a valid number to /api/todos/[id] for DELETE
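If you prefer scripting a few of these checks instead of clicking through a REST client, something like the small fetch script below works. The base URL assumes the default dev server port, the date value and ids are examples, and the script assumes the error responses are JSON as the response models describe:
// Example only: exercising a few of the routes above with fetch (Node 18+ or the browser console).
const baseUrl = "http://localhost:3000";

async function runChecks(): Promise<void> {
    // Valid create: expect a success response with the new todo in data.
    const created = await fetch(`${baseUrl}/api/todos`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ title: "My Title", dueDate: "2023-05-27", done: false }),
    });
    console.log("POST /api/todos", created.status, await created.json());

    // Extra property: expect a bad request with validation issues.
    const overPosted = await fetch(`${baseUrl}/api/todos`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ title: "My Title", dueDate: "2023-05-27", done: false, foo: "bar" }),
    });
    console.log("POST /api/todos (extra key)", overPosted.status, await overPosted.json());

    // Non-numeric id: expect a bad request from the idParamaterValidator.
    const badId = await fetch(`${baseUrl}/api/todos/one`);
    console.log("GET /api/todos/one", badId.status, await badId.json());
}

runChecks();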

Conclusion

NextJS has several great ways to create a REST API. In this demonstration we integrated Zod for request validation, and Prisma is used to communicate between our application and the database. By using builders, we can keep our code DRY (Don't Repeat Yourself). We will also be able to reuse the models that we created here in our client application, which we will demonstrate in the next part of this series.

Parts:

Part 1 – Create a Development Container, Create the Next App and Install Required Dependencies
Part 2 – Configure the REST Server Features
Part 3 – Configure the REST Client Features

About Black Slate

Black Slate is a Software Development Consulting Firm that provides single and multiple turnkey software development teams, available on your schedule and configured to achieve success as defined by your requirements independently or in co-development with your team. Black Slate teams combine proven full-stack, DevOps, Agile-experienced lead consultants with Delivery Management, User Experience, Software Development, and QA experts in Business Process Automation (BPA), Microservices, Client- and Server-Side Web Frameworks of multiple technologies, Custom Portal and Dashboard development, Cloud Integration and Migration (Azure and AWS), and so much more. Each Black Slate employee leads with the soft skills necessary to explain complex concepts to stakeholders and team members alike and makes your business more efficient, your data more valuable, and your team better. In addition, Black Slate is a trusted partner of more than 4000 satisfied customers and has a 99.70% “would recommend” rating.

The post PART 2 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) appeared first on Black Slate.

]]>
255179
Part 1 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) http://www.blackslatesoftware.com/part-1-full-stack-todos-application-with-nextjs-prisma-using-sql-server-and-redux-toolkit-rtk/ Fri, 02 Jun 2023 16:26:13 +0000 http://www.blackslatesoftware.com/?p=255154 The post Part 1 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) appeared first on Black Slate.

]]>

Part 1 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK)

The purpose of this document series is to describe the steps that are necessary to create a “Todos” application using NextJS, Prisma, and Redux Toolkit (RTK).

NextJS is an exciting tool for creating web applications because it allows your server code and client code to live in the same repository. It is essentially like having a NodeJS server application and a React application in the same repository.

TLDR, show me the code: https://github.com/woodman231/nextjs-prisma-todos-rest

TOPIC: Create a Development Container, Create the Next App and Install Required Dependencies

This first step involves creating a development environment that does not depend on what is installed on your local computer. Instead, the dependencies are provided within the development container, so that developers we share the code with will not need to modify their computers to join us on our adventure.

Create the Development Container

To get started I will be creating a new directory on my computer called “nextjs-prisma-todos-rest”.

Open the directory in visual studio code.

Within Visual Studio Code do a CTRL+SHIFT+P to bring up the command palette and select the option "Dev Containers: Add Dev Container Configuration Files."

Next, select the "C# (.NET) and MS SQL" option (I know we will really be using NodeJS, but this is a good template to start from since it has MS SQL preconfigured for us).
Now use the default options to create your files.

Once complete, you should have a directory structure similar to this:

In the devcontainer.json file, change the name field to "Nodejs and MS SQL".

In the Dockerfile, update the first FROM line to be:

FROM mcr.microsoft.com/devcontainers/javascript-node:0-18
Once your files have been saved, open the Visual Studio Code command palette (CTRL+SHIFT+P) and select the option for "Dev Containers: Rebuild and Reopen in Container."
It will take a minute or more to create the dev container. You can click the show log if you like.

Once it is done, your Docker Desktop should look something like this:

Confirm connectivity to MS SQL Server

Let’s confirm our connection to the SQL server before we go much further.

In VS Code click on the SQL Server button on the left.

Double click on the mssql-container button:
When prompted, enter the password from the devcontainer.json file, and accept the server certificate when asked. If it doesn't work right away you may be prompted to retry; confirm that the server is "localhost,1433", there is no default database, the username is "sa", the password is "P@ssw0rd", keep the default name of "mssql-container", and yes, you do want to accept the server certificate.

Once connected, we can see that there is already a database called "ApplicationDB" with no tables. For our demonstration we will use this database.

Create the NextJS App

Let's start by creating our next app. Do a CTRL+SHIFT+` to bring up a terminal. These terminal commands will be issued in the dev container and not on your local computer. Execute the following command:

npx create-next-app@latest
For purposes of this demonstration, I selected the following options:
When that is complete you should have a directory structure like this:
There is a quirk when using NextJS in a development container: Hot Module Reloading (HMR) doesn't work properly out of the box, because file system change events are not always forwarded into the container. To fix that, edit the next.config.js file to have the following code:
/** @type {import('next').NextConfig} */
const nextConfig = {
    webpack: function (config, context) {
        // Poll for file changes so Hot Module Reloading works inside the dev container.
        config.watchOptions = {
            poll: 1000,
            aggregateTimeout: 300,
        };

        return config;
    }
}

module.exports = nextConfig

Install and Initialize Prisma

In the terminal execute the following commands:

npm install prisma --save-dev
npm install @prisma/client --save
npx prisma init
Doing this modified our package.json to save prisma as a development dependency and the prisma client as a regular dependency. Furthermore, after initializing prisma we now have a .env file and a prisma directory with a schema.prisma file.
The default initialization assumes PostgreSQL. Since we are using SQL Server, we need to set some things straight. Open the .env file and change the DATABASE_URL to the following:
DATABASE_URL="sqlserver://localhost:1433;database=ApplicationDB;user=sa;password=P@ssw0rd;encrypt=true;trustServerCertificate=true"
That username and password should look familiar to you based on the contents of the devcontainer.json file, and the exercise we went through to connect to the dev container's SQL database.

Now is the time for us to create our Todos table.

Edit schema.prisma to have the following code:

// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "sqlserver"
  url      = env("DATABASE_URL")
}

model Todo {
  id Int @id @default(autoincrement())
  title String @db.VarChar(255)
  dueDate DateTime @db.DateTime
  done Boolean @default(false)
}
Notice that we also changed the provider in the datasource to "sqlserver" instead of the default "postgresql"; however, we left the url pointing at the same environment variable.

For more information about writing the schema file see: https://www.prisma.io/docs/concepts/components/prisma-schema

NOTE: I would like to point out that at this time Prisma only officially supports one "schema.prisma" file. You can add as many table definitions as you like to this file. There is a third-party solution available on NPM called prismix at https://www.npmjs.com/package/prismix/v/1.0.19, along with the associated GitHub repository at https://github.com/jamiepine/prismix. (At the time of this writing the prismix GitHub repository is in public archive mode, which means it is no longer being maintained by the developer. There is a thread on GitHub requesting an official solution, but it doesn't look like it has been fulfilled. You can check this link to see if it has been updated: https://github.com/prisma/prisma/issues/2377.)

For the purposes of this demonstration we will stick to the single file, but I did think it was important to point out this limitation, and the risk of using a third-party solution that is no longer maintained.

Once all of the files are saved, it is time to create our first migration. Execute the following command:

npx prisma migrate dev --name init
The CLI should output something like this:
If we refresh the SQL Server connection it should now look like this:
Do a right-click and "Select top 1000" on both the migrations and Todos tables to review the results.
The field names have the same casing as the field names we used in our schema.prisma file. If your organization has different field naming standards than what I demonstrated here, that is fine; you can adjust the code to fit them.
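If you would like a quick sanity check that the generated client can talk to the new table, a short throwaway script such as the one below works. The file name, the way it is run, and the todo values are examples only and are not part of the tutorial's solution:
// scripts/todoSanityCheck.ts (example only); run with something like: npx ts-node scripts/todoSanityCheck.ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main(): Promise<void> {
    // Insert one row; id autoincrements and done defaults to false.
    const created = await prisma.todo.create({
        data: { title: "First todo", dueDate: new Date("2023-05-27") },
    });
    console.log("created", created);

    // Read everything back to confirm the round trip.
    const todos = await prisma.todo.findMany();
    console.log("all todos", todos);
}

main()
    .catch((error) => console.error(error))
    .finally(() => prisma.$disconnect());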

Install and Initialize Zod Schema Validation

Zod is a TypeScript-first schema declaration and validation library. While Prisma alone generates TypeScript types that we can use throughout our application, it does not supply validation. We want to be sure that the REST requests we receive are valid for the data we are going to pass along to the database.

Execute the following command:

npm install prisma-zod-generator
Modify the prisma/schema.prisma file to include the zod generator.

The file should now look like this:

// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
}

generator zod {
  provider = "prisma-zod-generator"
}

datasource db {
  provider = "sqlserver"
  url      = env("DATABASE_URL")
}

model Todo {
  id Int @id @default(autoincrement())
  title String @db.VarChar(255)
  dueDate DateTime @db.DateTime
  done Boolean @default(false)
}
Execute the following command:
npx prisma generate
We should now have the following files:
I will put a pin in that for now and we will use some of these files later.

Conclusion

Creating a development environment that includes the database and the Node version that you will be using streamlines the onboarding process for other developers as they come on to help with the project.

Parts:

Part 1 – Create a Development Container, Create the Next App and Install Required Dependencies
Part 2 – Configure the REST Server Features
Part 3 – Configure the REST Client Features

About Black Slate

Black Slate is a Software Development Consulting Firm that provides single and multiple turnkey software development teams, available on your schedule and configured to achieve success as defined by your requirements independently or in co-development with your team. Black Slate teams combine proven full-stack, DevOps, Agile-experienced lead consultants with Delivery Management, User Experience, Software Development, and QA experts in Business Process Automation (BPA), Microservices, Client- and Server-Side Web Frameworks of multiple technologies, Custom Portal and Dashboard development, Cloud Integration and Migration (Azure and AWS), and so much more. Each Black Slate employee leads with the soft skills necessary to explain complex concepts to stakeholders and team members alike and makes your business more efficient, your data more valuable, and your team better. In addition, Black Slate is a trusted partner of more than 4000 satisfied customers and has a 99.70% “would recommend” rating.

The post Part 1 – Full Stack Todos Application with NextJS, Prisma (using SQL Server), and Redux Toolkit (RTK) appeared first on Black Slate.

]]>
255154
What Is The Power Platform and Why Consider It? (Manager Edition) http://www.blackslatesoftware.com/what-is-the-power-platform-and-why-consider-it-manager-edition/ Fri, 02 Jun 2023 14:30:55 +0000 http://www.blackslatesoftware.com/?p=255135 The post What Is The Power Platform and Why Consider It? (Manager Edition) appeared first on Black Slate.

]]>

What is the Power Platform and Why Consider It? (Manager Edition)

What is the Microsoft Power Platform? Why use it at your organization? How do I use the tools? Is it available to you? What are the disadvantages of using it? What’s the best way to get started? I answer some of the basic questions here.

Over the next five weeks, I will be publishing blog articles going into greater depth of each of the five Power Platform tools. But first, what is it?

What is the Microsoft Power Platform?

The Power Platform is a small collection of low-code/no-code tools that expedite the development of business apps while utilizing the latest cloud and artificial intelligence (AI) tools.

At the time of writing, the Power Platform consists of these five (5) tools:

    • Power Apps
    • Power Automate
    • Power BI
    • Power Pages
    • Power Virtual Agents

These tools enable you and your team to create applications much faster than typically done with traditional software development tools and methodologies. Your staff does not need to be well-versed in software coding. However, the solutions can be customized by someone with more coding skills.

The Power Platform can connect to nearly all other (Microsoft and alternative) tools and platforms.

Here are a few examples:

    • Using Power Apps, you can create an interactive app that connects to and displays data together from SQL Server, Salesforce, and an Oracle database hosted in AWS.
    • Using Power Pages, you can create a website using a predefined template with little web programming skills.
    • Using Power BI, you can create reports or interactive dashboards with visual data on a refresh cycle.
    • Microsoft is adding AI to all of its platforms and tools and the Power Platform is not an exception. Take advantage of these technologies!

Why use it at your organization?

By using the Power Platform, you save two things minimally – time and money.

You save time and money in these ways:

    • Saving time: solutions can be created much faster than with traditional software development methodologies.
    • Saving money: low-code development does not require a lot of programming experience, so less is spent on highly specialized developers.
    • Saving money: employees and contractors spend less time creating solutions, which lowers that cost as well.

How do I use the tools?

Each of the five tools has a specific purpose.

Here is how all five tools are used.

    • Power Apps – Create desktop, tablet, and mobile apps using drag and drop with limited coding and view them on any platform. With over 900 connectors available, you can pull/push the data from virtually any application or data source.
    • Power BI – Create reports and interactive dashboards using the latest UI visuals. Create the reports using the Power BI Desktop and publish them to the Power BI Service.
    • Power Automate (previously Microsoft Flow) – Create and automate workflows that perform repetitive tasks and can be triggered by schedules or events, or run manually.
    • Power Pages (previously Power Portal) – Create a website using one of the many available templates with very little coding.
    • Power Virtual Agents – Create virtual chat agents, backed by data dictionaries, that employees or customers can ask for information and use to reach other online resources.

Microsoft is constantly refining these tools. They may even add more to this list.

Is it available to you?

Your organization probably already has Microsoft 365 for employees. With it, they get the Power Platform tools. You can discover what has already been created and see where you can create your own solutions.

To give it a try, log into your web browser (I recommend Edge) using your work account and then try clicking these generic links below:


It’s interesting that the URLs are not uniform across the toolset. It is because they were created at different times by different teams.

I suggested my brother try this at his workplace and he was a little nervous he would be probing where he didn’t belong. “Hey – all he was doing was opening web pages!”

Note: To make your own apps, you may want to first create a personal environment to contain your work. If you have rights, you can create your own environment. I created one using the “sandbox” model at this link: https://admin.powerplatform.microsoft.com

What are the disadvantages of using it?

There are a lot of advantages to using Microsoft's "low code/no code" platform. Users with little programming skill can create simple solutions quickly, and the cost of development and maintenance is affordable. However, there are some things you should be aware of before considering it.

What to be aware of prior to consideration:

    • Confusing pricing – Although the prices are low, be sure to fully understand the pricing model of the Power Platform tools you are considering. Know that certain connectors may require a premium subscription level.
    • Limited customization – Microsoft owns the infrastructure of the Power Platform. Depending on which tool(s) you select, some customization can be performed through advanced coding. If this is required, check if the customizations needed are possible and who in your organization will perform them.
    • Limited access – Some Power Platform apps are intended to be used primarily by employees or partners of your organization. Know who the audience will be and how that fits with the tool you select.


What’s the best way to get started?


    • Start small! Choose the correct Power Platform tool and create a simple application with limited requirements/customizations.
    • Carefully select a tech-savvy user to create the solution.
    • Learn the development costs as well as the ongoing monthly costs once the solution is implemented.
    • It’s most likely your organization is already using the Power Platform. Using the links above, you will see what’s already being used. Connect with the employees who are already using it and learn how it’s been working for them.
    • Be sure your developer(s) connect with the other employees who have created solutions in the past.
    • Hire a consultant who can help you get started. They can suggest the best tools for the job and how to implement them with your existing infrastructure.

Conclusion

In the next few weeks we will cover the five tools in greater detail, each designed to help you experience the benefits of the Power Platform. Check back and you’ll find the links below as they are completed and posted.

About Black Slate

Black Slate is a Software Development Consulting Firm that provides single and multiple turnkey software development teams and co-development resources, available on your schedule and configured to achieve success as defined by your requirements. Black Slate teams combine proven full-stack, DevOps, Agile-experienced lead consultants with Delivery Management, User Experience, Software Development, and QA experts in Business Process Automation (BPA), Microservices, Client- and Server-Side Web Frameworks of multiple technologies, Custom Portal and Dashboard development, Cloud Integration and Migration (Azure and AWS), and so much more. Each Black Slate employee leads with the soft skills necessary to explain complex concepts to stakeholders and team members alike and makes your business more efficient, your data more valuable, and your team better. In addition, Black Slate is a trusted partner of more than 4000 satisfied customers and has a 99.70% “would recommend” rating.

The post What Is The Power Platform and Why Consider It? (Manager Edition) appeared first on Black Slate.

]]>
255135