Category: APPS

  • Finding New Pastures: Big Nerd Ranch’s Next Chapter


    It is with a mix of emotions that we announce the upcoming sunsetting of some key aspects of Big Nerd Ranch and the transition of others. For over 23 years, we’ve had the privilege of empowering aspiring programmers through our immersive bootcamps and books. From the iconic ranch in south Georgia to the late-night coding sessions, Big Nerd Ranch has fostered a unique and beloved community for anyone looking to grow and learn new technology.

    This decision hasn’t been an easy one. The landscape of tech education has evolved significantly since our inception. While Big Nerd Ranch has always strived to adapt, the current environment necessitates a more substantial shift.

    What this means:

    • Corporate and Team Training: We are still supporting our corporate partners and are continuing to offer training services with the BNR name under our company Stellar Elements.
    • Bootcamps: We are not planning any future classes with open enrollment.
    • Books: We are not planning on releasing any new editions of our books. Current editions will be available for the foreseeable future but will go out of print over time.
    • Community: We understand the strong bonds formed at the Ranch. Our existing resources will remain up for the time being, but they will begin to transition or wind down in the coming months.

    A Legacy of Learning

    Big Nerd Ranch has impacted thousands of lives. We’ve witnessed shy beginners blossom into confident programmers, and the stories of our alumni continue to inspire us. We are incredibly proud of the role we’ve played in shaping the tech landscape.

    Looking Ahead

    While this chapter for Big Nerd Ranch comes to a close, the passion for learning never fades. We encourage our students, alumni, and the broader tech community to keep exploring, keep coding, and keep pushing the boundaries of what’s possible.

    We are incredibly grateful for your support over the years. It has been an honor to be a part of your coding journey.

    With appreciation,

    The Big Nerd Ranch Team




  • Using the Navigation Component in Jetpack Compose


    Jetpack Compose is a declarative framework for building native Android UI recommended by Google. To simplify and accelerate UI development, the framework turns the traditional model of Android UI development on its head. Rather than constructing UI by imperatively controlling views defined in XML, UI is built in Jetpack Compose by composing functions that define how app data is transformed into UI.

    An app built entirely in Compose may consist of a single Activity that hosts a composition, meaning that the fragment-based Navigation Architecture Component can no longer be used directly in such an application. Fortunately, navigation-compose provides a compatibility layer for interacting with the Navigation Component from Compose.

    The Navigation Component

    The androidx.navigation:navigation-compose dependency provides an API for Compose apps to interact with the Navigation Component, taking advantage of its familiar features, including handling up and back navigation and deep links.
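
    To use it, the artifact must be declared in your module’s Gradle file. A minimal sketch, with an illustrative version number (check the current release before copying):

    dependencies {
       implementation("androidx.navigation:navigation-compose:2.7.7")
    }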

    The Navigation Component consists of three parts: the NavController, the NavHost, and the navigation graph.

    The NavController is the class through which the Navigation Component is accessed. It is used to navigate between destinations and maintains each destination’s state and the back stack’s state. An instance of NavController is obtained through the rememberNavController() function, as shown:

    val navController = rememberNavController()

    The NavHost, as the name indicates, serves as a host or container for the current navigation destination. The NavHost also links the NavController with the navigation graph (described below). Creating a NavHost requires an instance of NavController, obtained through rememberNavController() as described above, and a String representing the route associated with the starting point of navigation.

    NavHost(navController = navController, startDestination = "home") {
       ...
    }

    In the fragment-based manifestation of the Navigation Component, the navigation graph consists of an XML resource that describes all destinations and possible navigation paths throughout the app. In Compose, the navigation graph is built using the lambda syntax from the Navigation Kotlin DSL instead of XML. The navigation graph is constructed in the trailing lambda passed to NavHost as shown below:

    NavHost(navController = navController, startDestination = "home") {
       composable("home") { MealsListScreen() }
       composable("details") { MealDetailsScreen() }
    }
    

    In this example, the MealsListScreen() composable is associated with the route defined by the String “home,” and the MealDetailsScreen() composable is associated with the “details” route. The startDestination is set to “home,” meaning that the MealsListScreen() composable will be displayed when the app launches.

    Note that in the example above, the lambda is passed to the builder parameter of the NavHost function, which has a receiver type of NavGraphBuilder. This allows for the concise syntax for providing composable destinations to the navigation graph through NavGraphBuilder.composable().

    The NavGraphBuilder.composable() method has a required route parameter that is a String representing each unique destination on the navigation graph. The composable associated with the destination route is passed to the content parameter using trailing lambda syntax.

    The navigate method of NavController is used to navigate to a destination:

    navController.navigate("details")
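
    The NavController also handles backward navigation. A brief sketch of the relevant standard NavController calls:

    // Pop the current destination off the back stack, e.g. from a custom back button:
    navController.popBackStack()

    // Handle the Up affordance in an app bar; returns false if there is nowhere to navigate up to:
    navController.navigateUp()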

    While it may be tempting to pass the NavController instance down to composables that will trigger navigation, it is best practice not to do so. Centralizing your app’s navigation code in one place makes it easier to understand and maintain. Furthermore, individual composables may appear or behave differently on different screen sizes. For example, a button may result in navigation to a new screen on a phone but not on tablets. Therefore, it is best practice to pass functions down to composables for navigation-related events that can be handled in the composable that hosts the NavController.

    For example, imagine MealsListScreen takes an onItemClick: () -> Unit parameter. You could then handle that event in the composable that contains NavHost as follows:

    NavHost(navController = navController, startDestination = "home") {
       composable("home") {
          MealsListScreen(onItemClick = { navController.navigate("details") })
       }
    ...
    }
    

    Arguments can be passed to a navigation destination by including argument placeholders within the route. If you wanted to extend the example above and pass a string representing an id for the details screen, you would first add a placeholder to the route:

    NavHost(navController = navController, startDestination = "home") {
       ...
       composable("details/{mealId}") { MealDetailsScreen(...) }
    }
    

    Then you would add an argument to composable, specifying its name and type:

    composable(
       "details/{mealId}",
       arguments = listOf(navArgument("mealId") { type = NavType.StringType })
    ) { backStackEntry ->
       MealDetailsScreen(...)
    }
    

    Then, you would need to update calls that navigate to the destination by passing the id as part of the route:

    navController.navigate("details/1234")

    Finally, you would retrieve the argument from the NavBackStackEntry that is available within the content parameter of composable():

    composable(
     "details/{mealId}",
     arguments = listOf(navArgument("mealId") { type = NavType.StringType })
    ) { backStackEntry ->
     MealDetailsScreen(mealId = backStackEntry.arguments?.getString("mealId"))
    }
    

    One of the key benefits of using the Navigation Component is the automatic handling of deep links. Because routes are defined as strings that mimic URIs by convention, they can be built to correspond to the same patterns used for deep links into your app. Carrying the example above forward and assuming that it is associated with a fictitious web property at https://bignerdranch.com/cookbook, you would first add the following intent filter to AndroidManifest.xml to enable the app to receive the appropriate deep links:

    <intent-filter>
       <action android:name="android.intent.action.VIEW" />
       <category android:name="android.intent.category.DEFAULT" />
       <category android:name="android.intent.category.BROWSABLE" />
       <data
          android:host="bignerdranch.com"
          android:pathPrefix="/cookbook"
          android:scheme="https" />
    </intent-filter>
    

    Then you would update your composable destination to handle deep links of the pattern https://bignerdranch.com/cookbook/{mealId} by passing a value to the deepLinks parameter as shown:

    composable(
       "details/{mealId}",
       arguments = listOf(navArgument("mealId") { type = NavType.StringType }),
       deepLinks = listOf(navDeepLink { uriPattern = "https://bignerdranch.com/cookbook/{mealId}" })
    ) { backStackEntry ->
     MealDetailsScreen(mealId = backStackEntry.arguments?.getString("mealId"))
    }
    

    These deep links could be tested using an ADB command such as:

    adb shell am start -d https://bignerdranch.com/cookbook/1234

    In the above demonstrations, string literals were used to define routes and navigation argument names for clarity and simplicity. It is best practice to store these strings as constants or in some other construct to reduce repetition and prevent typo-based bugs. A cleaner implementation of the above example might look like this:

     

    interface Destination {
       val route: String
       val title: Int
    }
    
    object Home : Destination {
       override val route: String = "home"
       override val title: Int = R.string.app_name
    }
    
    object Details: Destination {
       override val route: String = "details"
       override val title: Int = R.string.meal_details
       const val mealIdArg = "mealId"
       val routeWithArg: String = "$route/{$mealIdArg}"
       val arguments = listOf(navArgument(mealIdArg) { type = NavType.StringType })
       fun getNavigationRouteToMeal(mealId: String) = "$route/$mealId"
    }
    
    ...
    
    NavHost(
       navController = navController,
       startDestination = Home.route
    ) {
       composable(Home.route) {
          MealsListScreen(onItemClick = {
             navController.navigate(Details.getNavigationRouteToMeal(it))
          })
       }

       composable(
          Details.routeWithArg,
          arguments = Details.arguments
       ) { backStackEntry ->
          MealDetailsScreen(
             mealId = backStackEntry.arguments?.getString(Details.mealIdArg) ?: ""
          )
       }
    }
    

    Lack of argument type safety

    The primary drawback of this approach is the lack of type safety when passing arguments. While this may not seem like a big deal if you are following the best practice of not passing complex data in navigation arguments, it would still be preferable to have compile-time assurance, even for simple types.

    Repetitive and cumbersome API for passing arguments

    In addition to the lack of type safety, the API for defining argument types and parsing them from the NavBackStackEntry is fairly repetitive and cumbersome. It also involves a fair amount of potentially tricky string concatenation to build routes.

    No navigation editor

    Many developers have grown to enjoy using the Navigation Editor to get a visual representation of the navigation graph for their apps and to quickly and easily define navigation actions. There is no comparable tool for Compose.

    Use Fragments to host Compose

    Perhaps the most straightforward alternative, especially if you’re already accustomed to the fragment-based Navigation component, would be to use Fragments to host each screen-level composable. This would carry the benefit of type-safe navigation arguments and access to the Navigation Editor.

    Third-party alternatives

    As a result of the drawbacks above, several third-party tools, such as Compose Destinations and Voyager, have been developed. For a detailed overview and comparison of these alternatives, we recommend this article.




  • Technical Writing 101: Working with SMEs 


    One of the unique things about technical writing is that we are often tasked with documenting processes, codebases, software, and more that we don’t fully understand. Because of this, technical writers document things about which they have a strong working knowledge but lack complete mastery.

    So, how do technical writers fill the knowledge gap? That’s where Subject Matter Experts (SMEs) come in. Working with them is one of the key functions of being a technical writer. And, if properly handled, it is one of the most rewarding and fun aspects of being a technical writer.

    What is an SME?  

    Subject Matter Experts have advanced knowledge and specialized skills in a specific field. For example, when it comes to software engineering, an SME may have advanced knowledge in Android development, iOS development, machine learning, and so on. They are authorities in a specific area, so they are qualified (and often asked) to give guidance around strategy, uses, implementation, and learning in the field in which they’re an SME.

    They’re needed and used in a wide range of industries, from technology to energy to chemicals to transportation, and SMEs remain SMEs by continuing to grow and learn in their given field. An SME is an industry-agnostic role and concept, but we often see them in technical fields. As a technical writer at a tech consulting firm, the SMEs I engage with are usually focused on software engineering.  

    Why do technical writers work with SMEs?  

    To be a technical writer is to be curious. And with that comes the endless desire to learn. As technical writers, we are constantly learning. We are brought in to document things we are familiar with but not fluent in. Luckily, there is often an SME who is fluent in what we technical writers are familiar with and tasked to document. Enter the SME/technical writer pairing.

    These pairings are used on various projects—from internal work to user-facing work and consulting. A technical writer and an SME are paired to fill knowledge and skill gaps. An SME might be an incredible engineer but lack confidence in their writing abilities. As for the technical writer, they might be an amazing writer, but their hard engineering skills might not be at a high level. Thus, the technical writer and SME combine to fill one another’s knowledge and skill gaps. Through doing so, they offer a lot of value and bring a lot to both their work.  

    For technical writer and SME pairings to do stellar work, they need to know how to best work with one another. I can only speak to this from a technical writer standpoint, but the tips I will share go both ways.  

    How to work best with SMEs 

    Communicate early and often. 

    This tip sounds a bit obvious, but in the fast-paced environments we often work in, communication can fall by the wayside. So, be intentional about communicating with the SME(s) you work with—whether that is asynchronous communication through Slack and email or Zoom and in-person meetings—and always leave ample time for discussion before any work is due. That means bringing to light any concerns, gaps in knowledge, and potential blockers before they become issues that could slow down or hinder your work.

    Be kind.  

    Once again, this might sound obvious. But friendliness goes a long way! A technical writer’s relationship with an SME is often somewhat one-sided insofar as we often ask way more of them than they do of us. From questions about new API changes to double-checking basic code snippets we write, SMEs do a lot for us. So, it is essential to be kind. And better yet, get to know them on a personal level. This is important with any coworkers, but the ones you work the closest with should be individuals that you can get to know on a personal level.  

    An SME is not a walking knowledge bank. Don’t treat them as such, and don’t reach out only when you need help. You likely have shared interests and commonalities. Get to know them, and you never know. You might make a new friend or mentor.  

    Ask questions.  

    As a technical writer, you need to ask your SME(s) a lot of questions about what you’ve been tasked with documenting. This is because you need to translate the knowledge that lives in your SME’s head into consumable words. People say there is no such thing as a bad question, and while that is true, there are bad ways to ask questions.

    It is also important to do your homework before asking questions. For example, if you have a question about a code snippet an SME passed to you, run it in an IDE first to get a better understanding of it before you ask your question. A little legwork can help you answer your own questions. Ask questions clearly, break a question into multiple simpler questions if it is complex, and ask them in such a way that you and an SME can work toward an answer together.  

    Knowledge transfers between an SME and a technical writer often start with the technical writer asking questions or reaching out for guidance and support. This matters because SMEs can teach us so much, and asking active questions (where examples can be created or talked through) is a great way to learn. Important questions should never wait: putting off a project-related question until the last minute puts undue stress on the SME to get you an answer immediately, and rushing an answer or teaching something rapidly might not lead to the best solution. Be diligent, timely, thoughtful, and thorough with your question-asking, and SMEs will repay you with brilliant answers and knowledge transfers.

    Be transparent about skillsets and boundaries.  

    A technical writer and an SME should always be open and honest about their skillsets and the places where they may start to feel out of their depth. This also applies to roles and responsibilities. There might be some overlap in tasks between a technical writer and an SME, but letting one another do what they do best and not overstepping is very important. You’ve been paired together for a reason: to highlight one another’s strengths and fill in one another’s weaknesses.

    Openly discuss and share these strengths and weaknesses so you and your SME know how to best work with and support each other. Knowing your strengths and weaknesses, you can do your best work together. And more importantly, you can start learning from one another. An SME and technical writer pairing is as much a soft mentorship as a business practice. Make the most of it.  

    In conclusion 

    As a technical writer, being paired with an SME can prove to be one of the most fun and fruitful experiences in your career. You need to be intentional about how to go about that experience, and I’ve shared some tips on how to make the most of it. Any project where I am paired with an SME is a joy for me because I get to learn from the best while also doing impactful work.

    Next time you find yourself working with an SME, try these tips. They will help you build rapport more quickly—and collaborate more effectively.  




  • The Definition of Done – Big Nerd Ranch


    Definition of Done is nothing but a comprehensive checklist of necessary, value-added activities for the quality of a feature.

    – Scrum Alliance

    The value of the Definition of Done is to ensure that everyone involved in a software development project, including the development team, product owner, and stakeholders, has a common understanding of what constitutes a complete and potentially releasable increment of work.

    The Definition of Done checklist ensures that everyone involved in the project has a common understanding throughout the story’s life cycle. Checklist items get checked off at each of the phases.

    The following are the phases of feature stories, from ready to estimate to done.

    Ready to estimate

    • Story written by the product owner
    • Acceptance criteria are understood
    • Story accepted by technical lead
    • Candidate design is present

    The product owner is responsible for defining the requirements and ensuring that they are met. At the same time, the development team is responsible for implementing the requirements and ensuring that they meet the desired quality standards.

    Acceptance criteria should be written in a user story format, using clear and concise language. They should include both functional and non-functional requirements.

    Note: It is important to revisit and revise acceptance criteria throughout the development process, as requirements may change or become clearer over time.

    Ready to play

    • Story is assigned to sprint backlog.
    • Story has been reviewed by team.
    • Story has been estimated.
    • Story has been moved to active sprint board.

    In development

    • Story is actively being worked on by developer(s) to bring it to QA.
    • Story is not blocked.

    During this phase:

    • New unit tests are written.
    • Existing unit tests pass.
    • ACs have passed on the development environment.
    • The dev team approves PR.
    • Code is merged.
    • Dev has added comments to the story demonstrating the ACs have passed.

    This includes:

    • Versions and screenshots, especially for new UIs.

    Alert QA if the code changes could affect other application areas and should be tested. Devs will meet with QA to do a handoff, answer questions, and resolve bug items.

    Note: When a dev creates a bug, they should add reproducible steps with screenshots or video clips attached as proof.

    Ready to Test

    • Tech lead has verified the dev’s work meets established standards.
    • Developer has completed all work necessary for the deliverable.
    • Story is positioned for QA work to start.

    In Test (QA)

    • QA is working on the story.
    • ACs are stable, clear, concise, and meet business requirements.

    During this phase:

    • Review the user story!
    • Write executable test cases, ensuring each test case corresponds to an acceptance criterion and that all possible scenarios and edge cases, both functional and non-functional, are covered.
    • Collaborate with the product owner and the development team.

    This includes notes, screenshots, test environment details, and build versions. Bugs are logged, per procedure, for bug tracking.

    Done!

    • Story has been accepted by QA.
    • Story is “code complete.”

    There may be additional activities needed to deploy or release the story. However, limitations of testing have been noted, and any tech debt or newly discovered cases have been recorded for the future backlog.

    In conclusion, the Definition of Done is an essential tool for any development team. By meeting all these criteria, we can ensure that the user story is fully completed and ready to be released to the customer. This will help us deliver high-quality software that meets the needs of our users and stakeholders.




  • Introduction to Svelte – Big Nerd Ranch


    Svelte is a great front-end JavaScript framework that offers a unique approach to the complexity of front-end systems. It claims to differentiate itself from other popular frameworks, like React and Angular, by solving these problems rather than moving the complexity around. Some key advantages of Svelte are its impressive performance and speed—and its simple implementation that allows for slim files and less boilerplate. For example, with Svelte stores, you can implement full global state management in less than ten lines of code!

    Unlike other frameworks, Svelte does not run in the browser, nor does it use the virtual DOM. Instead, it uses a compiler that runs at build time to compile components into standalone JavaScript modules. This approach leads to faster and more efficient applications, making Svelte an excellent choice for building high-performance web applications.

    The best way to see all that Svelte offers is to build an app yourself!

    Follow along with us to build a Todo app and witness Svelte in action as you implement some of its key features.

     

    Getting Started

    We assume the reader has Node.js (version 16 or above), a code editor, and a browser installed.

    To initialize the todo app, run the following command in your terminal:

    npm create vite@4 svelte-todo -- --template=svelte-ts

    This command uses the build tool Vite to create and scaffold a Svelte project for you entitled svelte-todo.

    This example also adds the option at the end of the command to include TypeScript in your application.

    You may be prompted to install create-vite.

    Enter y to continue.

    To get your project running, follow the steps listed:

    • cd svelte-todo
    • npm install
    • npm run dev -- --open

    Congratulations! Your Svelte app is running in your browser at localhost:5173. Open the directory in your code editor. The Vite scaffolding adds a small demo app. To remove it, simply delete these files:

    • ./src/assets
    • ./src/lib
    • ./src/app.css

    Also, perform the following edits:

    • Delete the entire contents of ./src/App.svelte.
    • Remove the line import './app.css' from main.ts.

    Create a Component

    Let’s create your first Svelte component.

    • First, create a new folder inside the src directory called components.
    • Create a new file in the components folder called TodoItem.svelte.

    As the title implies, you are creating a to-do item. We know we will need a checkbox and a description of the item. Svelte components consist of three different parts: JavaScript, HTML, and CSS.

    That’s it.

    Start by adding the HTML.

    <div>
       <input type="checkbox" />
       <span>{item}</span>
    </div>

    In the code above, you are using familiar HTML tags. There is one question, though: how do you get access to item? Add JavaScript to your component by using script tags. By convention, these typically reside at the beginning of the file.

    <script lang="ts">
       export let item: string;
    </script>

    Variables that will be used by your HTML, like item, are declared inside your script tags. Then, you access them inside brackets like <span>{item}</span>.

    In this case, there is an extra keyword: export. This means that the value of item comes from an external source and will be passed into the component as a property. The lang="ts" is optional and denotes that you are using TypeScript inside the tag. That is everything you need for your first component. To see your component in the browser, you need to use it in your application.

    Open the file ./src/App.svelte and replace the previous contents of the file:

    <script>
       import TodoItem from './components/TodoItem.svelte';
       let todoItems = [
          'Generate Project Skeleton',
          'Create your first component',
          'Create a store',
          'Make a new todo items flow'
       ]
    </script>
    
    {#each todoItems as todoItem}
       <TodoItem item={todoItem} />
    {/each}

    Let’s look at the JavaScript inside the script tags first. The TodoItem component that you created is being imported. Anytime a component is used in another component, it must be imported inside the script tag.

    Next, you declare a new variable called todoItems. This is very similar to how the item variable was declared in the TodoItem component, with one big difference. There is no export keyword. This means that this variable is local to this component; the value is not coming from an external source. todoItems is an array that contains a list of items you will accomplish during this tutorial.

    Now, take a look at the HTML. This looks different than the HTML you have already seen. That’s because it uses a logic block. In Svelte, you can add logic to HTML by wrapping it in curly braces. The character # lets you know that a logic block is beginning, and the character / signifies the block’s end.

    Because todoItems is an array, you can use an each block to iterate through the array and complete an action with each of the items in the array. In this case, you are looking at each item and returning a TodoItem component. Notice that the item is being assigned to the TodoItem component as an attribute; this is how TodoItem gets access to the item variable. Check out your application in the browser. You should have a list of items to check off as complete!

    Go ahead and check off Generate project skeleton and Create your first component—you’re halfway there!

    Stores

    Like other frameworks, including React, Svelte has tools for managing global and local state. Unlike the React Context API, the Svelte store does not rely on hierarchy. Components can consume state without being directly nested under provider components. The Svelte store also doesn’t even require that the calling code be Svelte. A store could be consumed through any plain JavaScript module.

    Svelte provides three types of stores out of the box: writable, readable, and derived. It also provides the ability to create a custom store (you can read more about that in the Svelte documentation), but in this tutorial, we will just use a writable store.
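
    For a taste of the other types, here is a sketch of a derived store that recomputes a value whenever another store changes. It assumes the todoItems store we will define in the next section and could live in its own file next to the todo store (file placement is up to you):

    import { derived } from 'svelte/store';
    import { todoItems } from './todo';

    // Recomputed automatically whenever todoItems changes.
    export const remainingCount = derived(todoItems, ($todoItems) =>
       $todoItems.filter((todo) => !todo.done).length
    );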

    Creating a store

    Inside /src, create a new directory and file /stores/todo.ts. Inside the new todo.ts file, start by importing the writable store type from svelte/store:

    import { writable } from 'svelte/store';

    Since this is TypeScript, we must define a type for our store. We could just use a string[], but we want to be able to mark items as done. So, let’s create a type that allows us to define a to-do item and maintain its done status.

    In this example, we’re going to export this type because we will need to import it later on in the tutorial, but you may not always need to do this.

    export type TodoItemType = {
       item: string;
       done: boolean;
    }

    Now we can define our store with the default to-do items we used earlier.

    export const todoItems = writable<TodoItemType[]>([
       { item: 'Generate project skeleton', done: true },
       { item: 'Create your first component', done: true },
       { item: 'Create a store', done: false },
       { item: 'Make a new todo items flow', done: false },
    ]);

    Consuming the store

    Navigate back to App.svelte. Since we’ll be pulling our todo items from a store, we will no longer need the local array todoItems. Let’s replace it with our new store.

    <script lang='ts'>
       import TodoItem from './components/TodoItem.svelte';
       import { todoItems } from './stores/todo';
    </script>

    Now our page is broken because we need to update our loop to use the item key inside the todoItem object we have in our store. Do that now.

    {#each todoItems as todoItem}
       <TodoItem item={todoItem.item} />
    {/each}

    Wait, our page still isn’t displaying our to-do items! That’s because todoItems is the store itself, not the store’s value. To get that value, we need to subscribe to the store. We can manage subscriptions manually with the store’s subscribe function (which returns an unsubscribe function), but this adds quite a bit of additional code.
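
    For illustration only, manual subscription looks roughly like this, with onDestroy as the assumed teardown point:

    <script lang='ts'>
       import { onDestroy } from 'svelte';
       import { todoItems } from './stores/todo';
       import type { TodoItemType } from './stores/todo';

       let items: TodoItemType[] = [];

       // subscribe returns an unsubscribe function; call it on teardown to avoid leaks.
       const unsubscribe = todoItems.subscribe((value) => {
          items = value;
       });

       onDestroy(unsubscribe);
    </script>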

    Thankfully, Svelte offers an easy way to “auto-subscribe” to a store. Auto-subscribing is as simple as prefixing our usage of the store with a $.

    {#each $todoItems as todoItem} 
       <TodoItem item={todoItem.item} /> 
    {/each}

    Look how clean that is!

    Writing to the store

    Let’s make a new component called AddTodoItem.svelte in the /components directory. This component will handle adding new items to our list. Before we interact with the store, let’s first create our UI. We will want a text input to type out our new item and a button to add it to the store.

    <input type="text"/>
    <button>Add</button>

    We need to maintain our input value locally. If you are familiar with React, you would typically do this by utilizing the useState hook and onChange or onBlur props, but in Svelte we use a concept called binding. We’ll start by defining the variable we want to bind the input value to in a script tag:

    <script lang='ts'>
       let todoItem = '';
    </script>

    Next, we will use bind to bind the value of the text input to todoItem.

    <input type='text' bind:value={todoItem} />
    

    Now the value of the variable will be synced with the value of the input, and we’ll be able to use it inside our click handler. So, let’s create a new function and assign it to the on:click event handler for the button.

    <script lang='ts'>
       let todoItem = '';
       const addTodoItem = () => {
          alert(todoItem);
       }
    </script>
    
    <input type="text" bind:value={todoItem} />
    <button on:click={addTodoItem}>Add</button>

    Before we can test this in the browser, we need to import and render this component in App.svelte.

    <script lang='ts'>
       import TodoItem from './components/TodoItem.svelte';
       import AddTodoItem from './components/AddTodoItem.svelte';
       import { todoItems } from './stores/todo';
    </script>
    
    <AddTodoItem />
    
    {#each $todoItems as todoItem}
       <TodoItem item={todoItem.item} />
    {/each}

    Check out your browser. Type a message in the text box and click “Add”. You should see a browser alert with the message you just typed.

    Nice work!

    To add a value to the store without overriding what is already there, we will use the update function that exists on the store object. This function takes a function as its only parameter. The parameter function will be passed the current value of the store. We can modify that value and return it to update the store.

    Update the script tag in AddTodoItem.svelte:

    <script lang='ts'>
      import { todoItems, type TodoItemType } from '../stores/todo';
      let todoItem = '';
    
      const addTodoItem = () => {
        todoItems.update((store: TodoItemType[]) => (
          [
            ...store,
            {
              item: todoItem,
              done: false
            }
          ]
        ));
      }
    </script>

    If you want to take it a step further, try updating TodoItem.svelte to toggle the “done” status in the store when a user checks the checkbox.
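
    One possible solution is sketched below. It assumes TodoItem is also passed the done flag (for example, <TodoItem item={todoItem.item} done={todoItem.done} />) and, for simplicity, matches items by their text; a unique id would be more robust:

    <script lang="ts">
       import { todoItems } from '../stores/todo';

       export let item: string;
       export let done = false;

       // Flip this item's done flag in the store when the checkbox changes.
       const toggle = () => {
          todoItems.update((store) =>
             store.map((todo) =>
                todo.item === item ? { ...todo, done: !todo.done } : todo
             )
          );
       };
    </script>

    <div>
       <input type="checkbox" checked={done} on:change={toggle} />
       <span>{item}</span>
    </div>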

    Styling

    Styling in Svelte is scoped to the component the style is defined in. You can accomplish this functionality with other frameworks using things like CSS modules or styled-components, but with Svelte it is included out of the box. So, we don’t have to worry about clashing tag styles or accidentally re-using a class name. These styles also live inside the same file as the component code, typically at the end of the file.

    Let’s start by adding some padding to the list of items. Add the following code to the end of TodoItem.svelte:

    <style>
       div {
          padding-top: 10px;
       }
    </style>

    And some styles to AddTodoItem.svelte:

    <style>
      button {
        padding: 5px 8px;
        background-color: #b16326;
        color: white;
        border-radius: 5px;
        border: none;
      }
    
      button:hover {
        background-color: #e38d39;
        cursor: pointer;
      }
    
      input {
        padding: 5px 8px;
        border-radius: 5px;
        border: 1px solid black;
      }
    </style>

    Notice our styles assigned to the input tag do not affect the input we have rendered inside TodoItem.svelte.

    Conclusion

    We’ve gone over the basic concepts of Svelte component structure, binding, stores, and styling. If you want to take things further, take a look at the official Svelte interactive tutorial for some more advanced topics, and be on the lookout for more Svelte blog posts!




  • Replacing Myself: Writing Unit Tests with ChatGPT


    The science-fiction future is among us as we find ourselves on the precipice of an AI revolution. As with all technological milestones, from the wheel to the computer, what we can do and how fast we can do it shift towards ever-increasing productivity. While the philosophers and keyboard warriors among us discuss the philosophical ramifications of AI, here at Big Nerd Ranch, we concern ourselves with more practical matters.

    In this series, I plan on investigating ChatGPT’s answer to one of the oldest questions in human history: How can I make my life easier?

    Recognizing that a robust suite of unit tests is essential for ensuring a healthy codebase isn’t rocket surgery. But let’s face it – writing tests can be tedious at times. If you’re anything like me, you’ve probably caught yourself daydreaming about having a trusty assistant to either take care of it all for you or, at the very least, lend a helping hand. We’re going to put ChatGPT through some tests today, starting with something easy to calibrate ourselves and moving on to testing a React reducer and React Component.

    Let’s walk before we run, shall we? We’ll ask it to write some unit tests for a recursive function to see if it gets tripped up. The simplest recursive function I can think of is the factorial, something we should have all seen in CS101:

    function factorial(n) {
      if (n === 0 || n === 1) {
        return 1;
      } else {
        return n * factorial(n - 1);
      }
    }
    

    Simple enough, right? Now we’ll ask for some unit tests:

    Prompt: Can you write me some unit tests for the following JavaScript function:

    test("factorial of 0 is 1", () => {
      expect(factorial(0)).toBe(1)
    })
    
    test("factorial of 1 is 1", () => {
      expect(factorial(1)).toBe(1)
    })
    
    test("factorial of 5 is 120", () => {
      expect(factorial(5)).toBe(120)
    })
    
    test("factorial of negative number throws an error", () => {
      expect(() => factorial(-1)).toThrow()
    })
    

    Impressive: ChatGPT not only accounted for the special case of 0 factorial but also considered the possibility of negative numbers as input.

    That gave us a decent idea as to how it will handle writing tests for a simple recursive function. Now that we’re calibrated, let’s switch gears to a more practical example.

    The following code snippet is a React reducer for an online shopping cart:

    const cartReducer = (state, action) => {
      switch (action.type) {
        case "ADD_ITEM":
          return {
            ...state,
            items: [...state.items, action.payload],
            total: state.total + action.payload.price,
          };
        case "REMOVE_ITEM":
          const itemToRemove = state.items.find(
            (item) => item.id === action.payload.id
          );
          return {
            ...state,
            items: state.items.filter((item) => item.id !== action.payload.id),
            total: state.total - itemToRemove.price,
          };
        case "CLEAR_CART":
          return {
            ...state,
            items: [],
            total: 0,
          };
        default:
          throw new Error(`Invalid action type ${action.type}`);
      }
    };
    

    We can identify a few tests that need to be written by skimming through it. At a minimum, we’d want tests for adding items to the cart, removing them, and emptying the cart. Before we start writing anything ourselves, let’s turn to our trusty assistant, ChatGPT, and see if it agrees with us.

    Prompt: Can you write me some unit tests for the following react reducer:

    This is what ChatGPT generated:

    describe("cartReducer", () => {
      let initialState;
    
      beforeEach(() => {
        initialState = { items: [], total: 0 };
      });
    
      it("should handle ADD_ITEM action", () => {
        const action = { type: "ADD_ITEM", payload: { id: 1, name: "Item 1", price: 10 } };
        const newState = cartReducer(initialState, action);
        expect(newState.items.length).toEqual(1);
        expect(newState.items[0]).toEqual(action.payload);
        expect(newState.total).toEqual(action.payload.price);
      });
    
      it("should handle REMOVE_ITEM action", () => {
        const itemToRemove = { id: 1, name: "Item 1", price: 10 };
        const action = { type: "REMOVE_ITEM", payload: itemToRemove };
        const state = { items: [itemToRemove], total: itemToRemove.price };
        const newState = cartReducer(state, action);
        expect(newState.items.length).toEqual(0);
        expect(newState.total).toEqual(0);
      });
    
      it("should handle CLEAR_CART action", () => {
        const state = { items: [{ id: 1, name: "Item 1", price: 10 }], total: 10 };
        const action = { type: "CLEAR_CART" };
        const newState = cartReducer(state, action);
        expect(newState.items.length).toEqual(0);
        expect(newState.total).toEqual(0);
      });
    
      it("should throw an error for invalid action types", () => {
        const action = { type: "INVALID_ACTION" };
        expect(() => cartReducer(initialState, action)).toThrowError(
          "Invalid action type INVALID_ACTION"
        );
      });
    
      it("should throw an error for invalid action types", () => {
        expect(() => cartReducer(initialState)).toThrowError(
          "Invalid action type INVALID_ACTION"
        );
      });
    });
    

    Not only did it whip up some tests for adding, removing, and emptying the cart, but it also included tests for handling empty and invalid actions. Now all that’s left is to run these tests. However, keep in mind that since ChatGPT doesn’t have any knowledge of our specific file structure, we might need to make some minor adjustments to get everything running smoothly. A small price to pay, I say.

    Let’s run these tests.

    Looks like everything passed except for one test. Looking through the cartReducer code, we can see why the test failed: the code we originally wrote isn’t built to handle the case where an empty action is passed to it, so we get a null reference exception when we attempt to throw the invalid action error. While creating that unit test can be considered a mistake on ChatGPT’s part, it did alert us to a potential unhandled exception in our cartReducer code.
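
    One way to harden the reducer (a sketch, not the code from the article) is to guard against a missing action before reading action.type; note that the generated test’s expected error message would also need updating to match:

    const cartReducer = (state, action) => {
      // Guard first so a missing action raises our error instead of a null reference.
      if (!action?.type) {
        throw new Error(`Invalid action type ${action?.type}`);
      }
      switch (action.type) {
        // ... cases unchanged from the original reducer ...
      }
    };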

    Let’s feed it a React component and see how it does. Here we have a Banner component:

    import React from "react";
    import PropTypes from "prop-types";
    
    const Banner = ({ title, subtitle, image, hidden }) => {
      return (
        <div className="banner">
          <img src={image} alt={title} hidden={hidden} />
          <div className="banner__content">
            <h1 className="banner__title">{title}</h1>
            <p className="banner__subtitle">{subtitle}</p>
          </div>
        </div>
      );
    };
    
    Banner.propTypes = {
      title: PropTypes.string.isRequired,
      subtitle: PropTypes.string.isRequired,
      image: PropTypes.string.isRequired,
      hidden: PropTypes.bool.isRequired,
    };
    
    export default Banner;
    

    Let’s see how ChatGPT handles it.

    Prompt: Can you write me some unit tests for the following React component:

    import React from "react";
    import { render, screen } from "@testing-library/react";
    import Banner from "./Banner";
    
    describe("Banner component", () => {
      it("renders the title", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={false}
          />
        );
        const titleElement = screen.getByText("Test Title");
        expect(titleElement).toBeInTheDocument();
      });
    
      it("renders the subtitle", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={false}
          />
        );
        const subtitleElement = screen.getByText("Test Subtitle");
        expect(subtitleElement).toBeInTheDocument();
      });
    
      it("renders the image with the correct alt text", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={false}
          />
        );
        const imageElement = screen.getByAltText("Test Title");
        expect(imageElement).toBeInTheDocument();
        expect(imageElement).toHaveAttribute("src", "test.png");
      });
    
      it("hides the image when hidden prop is true", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={true}
          />
        );
        const imageElement = screen.getByAltText("Test Title");
        expect(imageElement).not.toBeInTheDocument();
      });
    });
    

    We’ll go ahead and run these tests.

    A single test failed. ChatGPT assumed that toBeInTheDocument would return false when an element is hidden. toBeInTheDocument checks whether the element is present in the document tree, regardless of visibility, so in this case, it’s actually returning true. ChatGPT should have used toBeVisible instead. Let’s go ahead and make that change and see if we can get that test to pass.

    import React from "react";
    import { render, screen } from "@testing-library/react";
    import Banner from "./Banner";
    
    describe("Banner component", () => {
      it("renders the title", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={false}
          />
        );
        const titleElement = screen.getByText("Test Title");
        expect(titleElement).toBeInTheDocument();
      });
    
      it("renders the subtitle", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={false}
          />
        );
        const subtitleElement = screen.getByText("Test Subtitle");
        expect(subtitleElement).toBeInTheDocument();
      });
    
      it("renders the image with the correct alt text", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={false}
          />
        );
        const imageElement = screen.getByAltText("Test Title");
        expect(imageElement).toBeInTheDocument();
        expect(imageElement).toHaveAttribute("src", "test.png");
      });
    
      it("hides the image when hidden prop is true", () => {
        render(
          <Banner
            title="Test Title"
            subtitle="Test Subtitle"
            image="test.png"
            hidden={true}
          />
        );
        const imageElement = screen.getByAltText("Test Title");
        expect(imageElement).not.toBeVisible();
      });
    });
    

    There we have it, the tests are all passing!

    As with all AI-powered chatbots, certain limitations exist.

    Let’s identify a few of those weaknesses so we can sleep more soundly at night:

    1. Lack of Creativity: Chatbots may be capable of mimicking human-like conversation, but they lack true creativity. They are machines and cannot produce truly original output or come up with novel design patterns or hyper-efficient algorithms.
    2. Reliance on Training Data: The accuracy of AI is heavily dependent on the quality of its training data. If the data is out-of-date or inaccurate, it will severely affect the performance of the AI and, subsequently, its output.
    3. Need for Human Correction: Although a clever prompt engineer may be able to coax decent output from the AI, this article demonstrated some of the errors that ChatGPT can make. This presents an interesting trade-off, as I imagine that there is a relationship between the complexity of our code and the type of mistakes ChatGPT will make. This will require a keen eye. It was fortunate that the mistakes ChatGPT made in this exercise were easy to spot.

    While ChatGPT may not be ready to replace humans just yet, it’s clear that AI has the potential to revolutionize the way we live and work. As we continue to develop and use these tools, we can expect AI to become even more intelligent and capable. This presents an opportunity for developers to focus on the more challenging aspects of coding while leaving the repetitive tasks to our AI assistants.




  • Asynchronously load images with customized AsyncImage view in SwiftUI


    AsyncImage is a built-in SwiftUI view that asynchronously downloads and displays an image from a remote URL. It is designed to provide a smooth and performant user experience by downloading images asynchronously in the background while allowing the user to interact with the rest of the app.

    AsyncImage Basics

    To use AsyncImage, you simply provide a URL to the image you want to display, and AsyncImage takes care of the rest. It will show a placeholder image while the actual image is being downloaded and then update the view with the downloaded image when it’s available.

    The simplest way to use it is like so:

     AsyncImage(url: URL(string: "https://example.com/image.jpg")) { image in
        image
            .resizable()
            .aspectRatio(contentMode: .fit)
    } placeholder: {
        ProgressView()
    }

    As you can see in the example above, we provide a URL to the image we want to display and a closure that specifies how the downloaded image should be displayed (in this case, we make it resizable and set its aspect ratio). We also provide a placeholder view to be shown while the image is being downloaded (in this case, a ProgressView).

    Why would you need a custom AsyncImage view?

    While the built-in AsyncImage view in SwiftUI is quite powerful and versatile, there are times when you may need to create a custom version of the AsyncImage view to meet the specific requirements of your app. For example, in some cases, you may need a custom AsyncImage view that can load and display images from various sources, including remote URLs, local files, and captured images from the device’s camera.

    Custom loading behavior

    To create a custom AsyncImage view that can handle all three types of images, we can start by defining the ImageLoader that fetches the image from the source and emits image updates to a view.

    Handling various sources

    Let’s begin with the implementation of the loader:

    import SwiftUI
    import Combine
    import Foundation
    
    // 1
    enum ImageSource {
        case remote(url: URL?)
        case local(name: String)
        case captured(image: UIImage)
    }
    
    // 2
    private class ImageLoader: ObservableObject {
        private let source: ImageSource
    
        init(source: ImageSource) {
            self.source = source
        }
    
        deinit {
            cancel()
        }
        
        func load() {}
    
        func cancel() {}
    }

    Here is a breakdown of what is happening with the code:

    1. Define an enum ImageSource that can take in three different types of image sources: a remote URL, a local file name, and a captured image.
    2. Create an ImageLoader to bind image updates to a view.

    Handling different phases of the asynchronous operation

    Let’s implement image loading and cancelation.

    To provide better control during the load operation, we define an enum AsyncImagePhase (similar to the implementation in Apple’s documentation) to represent the different phases of an asynchronous image-loading process.

    In our example, we can define a Publisher in the ImageLoader that holds the current phase.

    // ...
    
    // 1
    enum AsyncImagePhase {
        case empty
        case success(Image)
        case failure(Error)
    }
    
    private class ImageLoader: ObservableObject {
        private static let session: URLSession = {
            let configuration = URLSessionConfiguration.default
            configuration.requestCachePolicy = .returnCacheDataElseLoad
            let session = URLSession(configuration: configuration)
            return session
        }()
    
        // 2
        private enum LoaderError: Swift.Error {
            case missingURL
            case failedToDecodeFromData
        }
        
        // 3
        @Published var phase = AsyncImagePhase.empty
    
        private var subscriptions: [AnyCancellable] = []
    
        // ...
    
        func load() {
            let url: URL
    
            switch source {
            // 4
            case .local(let name):
                phase = .success(Image(name))
                return
            // 5
            case .remote(let theUrl):
                if let theUrl = theUrl {
                    url = theUrl
                } else {
                    phase = .failure(LoaderError.missingURL)
                    return
                }
            // 6
            case .captured(let uiImage):
                phase = .success(Image(uiImage: uiImage))
                return
            }
    
            // 7
            ImageLoader.session.dataTaskPublisher(for: url)
                .receive(on: DispatchQueue.main)
                .sink(receiveCompletion: { completion in
                    switch completion {
                    case .finished:
                        break
                    case .failure(let error):
                        self.phase = .failure(error)
                    }
                }, receiveValue: {
                    if let image = UIImage(data: $0.data) {
                        self.phase = .success(Image(uiImage: image))
                    } else {
                        self.phase = .failure(LoaderError.failedToDecodeFromData)
                    }
                })
                .store(in: &subscriptions)
        }
    
        // ...
    }

    Here is a breakdown of what is happening with the code:

    1. Enum AsyncImagePhase defines the image-loading states: empty, success, and failure.
    2. Define the potential loading errors.
    3. Define a Publisher of the loading image phase.
    4. For local images, simply create an Image view using the file name and pass it into the successful phase.
    5. For remote images, handle loading success and failure respectively.
    6. For captured images, simply create an Image view with the UIImage input parameter and pass it into the successful phase.
    7. Use the static URLSession instance (configured to prefer cached data) to load an image from the specified URL, and deal with loading errors accordingly.
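
    The cancel() method was left as a stub earlier. A minimal sketch, using the subscriptions array already defined on the loader:

    func cancel() {
        // Tear down any in-flight Combine subscriptions.
        subscriptions.forEach { $0.cancel() }
        subscriptions.removeAll()
    }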

    Implement the AsyncImage view

    Next, implement the AsyncImage view:

    // 1
    struct AsyncImage<Content>: View where Content: View {
    
        // 2
        @StateObject fileprivate var loader: ImageLoader
    
        // 3
        @ViewBuilder private var content: (AsyncImagePhase) -> Content
    
        // 4
        init(source: ImageSource, @ViewBuilder content: @escaping (AsyncImagePhase) -> Content) {
            _loader = .init(wrappedValue: ImageLoader(source: source))
            self.content = content
        }
    
        // 5
        var body: some View {
            content(loader.phase).onAppear {
                loader.load()
            }
        }
    }

    What this code is doing:

    1. Define an AsyncImage view that takes a generic type Content, which itself must conform to the View protocol.
    2. Bind AsyncImage to image updates by means of the @StateObject property wrapper. This way, SwiftUI automatically rebuilds the view every time the loading phase changes.
    3. The content property is a closure that takes an AsyncImagePhase as input and returns a Content. The AsyncImagePhase represents the different states the image can be in: empty, success, or failure.
    4. The initializer takes an ImageSource and the content closure as inputs, letting callers supply a closure that receives an AsyncImagePhase describing the state of the loading operation.
    5. In the body property, image loading starts when AsyncImage’s body appears.
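
    To see how the pieces fit together, here is a minimal, hypothetical call site (the URL is a stand-in, and the ImageSource enum comes from earlier in this post):

    struct AvatarView: View {
        var body: some View {
            AsyncImage(source: .remote(URL(string: "https://example.com/avatar.png"))) { phase in
                switch phase {
                case .empty:
                    // Still loading: show a spinner.
                    ProgressView()
                case .success(let image):
                    // Loaded: render the image.
                    image
                        .resizable()
                        .scaledToFit()
                case .failure:
                    // Failed: fall back to a system placeholder.
                    Image(systemName: "photo")
                }
            }
        }
    }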

    Custom Initializer

    By creating a custom AsyncImage view, you can customize its initializer to suit your specific needs. For example, you might want to add support for placeholder views that display while the image is loading or when loading fails.

    extension AsyncImage {
    
        // 1
        init<I, P>(
            source: ImageSource,
            @ViewBuilder content: @escaping (Image) -> I,
            @ViewBuilder placeholder: @escaping () -> P) where
            // 2
            Content == _ConditionalContent<I, P>,
            I : View,
            P : View {
            self.init(source: source) { phase in
                switch phase {
                case .success(let image):
                    content(image)
                case .empty, .failure:
                    placeholder()
                }
            }
        }
    }

    1. This custom initializer for the AsyncImage view allows custom content and placeholder views to be provided.
    2. _ConditionalContent is how SwiftUI encodes view type information when dealing with if, if/else, and switch conditional branching statements. The type _ConditionalContent<I, P> captures the fact that the view can be either an Image or a Placeholder.

    There are certain things you need to be aware of regarding _ConditionalContent:

    _ConditionalContent is a type defined in SwiftUI’s internal implementation, which is not meant to be accessed directly by developers. It is used by SwiftUI to conditionally render views based on some condition.

    While it is technically possible to reference _ConditionalContent directly in your SwiftUI code, doing so is not recommended because it is an internal implementation detail that may change in future versions of SwiftUI. Relying on such internal implementation details can lead to unexpected behavior or crashes if the implementation changes.

    Instead, you can refactor the switch into a separate view, using if statements or the @ViewBuilder attribute, to achieve the same result without directly referencing the internal _ConditionalContent type. This approach is a safer and more future-proof way of conditionally rendering views in SwiftUI.

    Here’s an example of how to conditionally render a view using an if statement:

    struct DefaultAsyncImageContentView<Success: View, FailureOrPlaceholder: View>: View {
        var image: Image?
        @ViewBuilder var success: (Image) -> Success
        @ViewBuilder var failureOrPlaceholder: FailureOrPlaceholder
    
        init(image: Image? = nil, @ViewBuilder success: @escaping (Image) -> Success, @ViewBuilder failureOrPlaceholder: () -> FailureOrPlaceholder) {
            self.image = image 
            self.success = success
            self.failureOrPlaceholder = failureOrPlaceholder()
        }
    
        var body: some View {
            if let image {
                success(image)
            } else {
                failureOrPlaceholder
            }
        }
    }
    
    extension AsyncImage {
        init<I, P>(
            source: ImageSource,
            @ViewBuilder content: @escaping (Image) -> I,
            @ViewBuilder placeholder: @escaping () -> P) where
            Content == DefaultAsyncImageContentView<I, P>, 
            I : View,
            P : View {
            self.init(source: source) { phase in
                var image: Image?
                if case let .success(loadedImage) = phase {
                    image = loadedImage
                }
                return DefaultAsyncImageContentView(image: image, success: content, failureOrPlaceholder: placeholder)
            }
        }
    }

    As you can see in the examples above, the custom initializers allow us to take complete control of all the steps of image presentation.
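
    For instance, a call site for the placeholder-based initializer might look like this (a hedged sketch; the URL is again a stand-in):

    AsyncImage(source: .remote(URL(string: "https://example.com/photo.jpg"))) { image in
        // Success: style the loaded image.
        image
            .resizable()
            .scaledToFill()
    } placeholder: {
        // Empty or failure: show a spinner instead.
        ProgressView()
    }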

    Conclusion

    In summary, creating a custom AsyncImage view can give you more control over the loading, processing, and display of images in your SwiftUI app, and can help you meet the specific requirements of your app. Thanks for reading. I hope you enjoyed the post.




  • Now Available React Programming: The Big Nerd Ranch Guide

    Based on our React Essentials course, this book uses hands-on examples to guide you step by step through building a starter app and a complete, production-ready app, both crafted to help you quickly leverage React’s remarkable power. 

    Your production-ready app will include a cart, a login page, and other e-commerce features. 

    What do you need to know? 

    Before you dive into the book, you will want to be familiar with the basics of HTML and CSS. It will also help to have a foundational knowledge of JavaScript, although we will cover some JavaScript concepts throughout the book. You will use these three languages as you build your React application. No need to know anything about React as we will take you through it step by step. 

    What will you learn? 

    Components 

    You will build your application with functional components and JSX. Using components will allow you to reuse code across your application. Components also help organize your code in a logical way that makes it easier to understand and allows for the separation of concerns. 

    Hooks 

    You will learn how and when to use the most common React hooks: useEffect, useState, useReducer, useRef, useContext, useMemo, and useCallback. You will have opportunities to compare similar hooks and choose the one that best fits your situation. Additionally, we will walk you through creating a custom hook so that you can further take advantage of all that hooks have to offer.

    Sharing State 

    You will use props and context to share state between components and to maintain state across multiple pages. We will compare the trade-offs between the methods, so you can select which is best for each use case in your application.

    API Integration 

    You will practice connecting your app to a server using both API requests and web sockets so that you can send and receive information. You will also learn how to display loading and error states to keep your users in the loop during API interactions. 

    Testing 

    You will learn how to thoroughly test your React components and flows using React Testing Library. You will also gain experience writing end-to-end tests with Cypress, a powerful testing framework for modern web applications. Additionally, we will cover setting up a mock server to test your application in a controlled environment. By comparing different testing methods, you will learn to choose the most effective approach for your needs. 

    Performance Tuning 

    Once you’ve completed the two applications and honed your React skills, you will dive into performance optimization. You will discover how to fine-tune your components to ensure your applications respond quickly to user interactions. You will analyze the bundle size of your application and use lazy loading to reduce the time to first contentful paint. You will analyze a large page that responds slowly to user input and use tools like memo, useMemo, useCallback, and useTransition to improve responsiveness. 

    Helpful Tools 

    There are several community libraries that you will use throughout this book. One of those will be React Router, which you will use to navigate your application. You will use PropTypes to specify the props coming into your components, so you will know if your components receive incorrect props. You will also use ESLint to check your application for coding errors and to enforce coding style. 

    Where can you buy the book? 

    You can order print and DRM-free electronic copies from InformIT. It is also available from Amazon and other booksellers. 

    If you want the whole Big Nerd Ranch experience, enroll in one of our bootcamps or bring us on-site with our corporate training. 

    Loren Klingman

    Author
    Big Nerd Ranch

    Loren Klingman is a full-stack web developer and instructor at Big Nerd Ranch. He has over 15 years of experience across a variety of tech stacks. When he’s not at work, he can be found playing tabletop games.




  • Why foray into machine learning?

    Mac and Mobile – how it all started 

    Big Nerd Ranch has made our name in Mac and mobile. We were the first to have a comprehensive method for teaching Mac programming, iOS programming, and Android programming. Our deep knowledge of these technologies, forged through real client experience, coupled with our deep empathy for students and how they learn, has allowed us to help thousands of individuals and hundreds of companies to build their own native mobile applications. We think of ourselves as a friendly guide, helping our students solve problems one-by-one until they eventually develop a deep and comprehensive understanding of how to build a quality application that will hold up over the long term. 

    The transition to mobile involved a massive shift in capabilities and mindset. Designing and building for the small screen was new to most, so they needed comprehensive and high-quality training to capitalize on the opportunities afforded by this new technology. Big Nerd Ranch was able to train digital staff from across the spectrum, showing them what was possible with this new technology and giving them the confidence to build and explore more.

    It worked. We have been able to train and guide digital companies that have gone on to become household names: Nextdoor and Facebook, to name a couple. These companies, and many like them, leveraged our training to catapult themselves into the mobile and digital age, often seeing incredible results from their efforts.

    Still mobile – now machine learning  

    Mobile continues to be a critical component of every company’s digital strategy, and we continue to dedicate ourselves to enabling individuals, teams, and organizations in the mobile space. Over the past few years, we have seen that machine learning and artificial intelligence have become the latest frontiers in the digital landscape. Aside from all of the recent news about AI and ML, we know they are prevalent because our clients have increasingly looked to incorporate these technologies into their digital products. Like mobile technologies, machine learning offers an entirely new set of tools with which designers, product owners, and engineers can bring their ideas to life. And like mobile technologies, there is currently a wide gulf between the promise the technology holds and the knowledge and skills organizations have to capitalize on that promise. 

    That’s why we created our Explore Machine Learning course. We want to serve a similar role as we have in the mobile space: acting as the friendly guide helping individuals, teams, and organizations to unlock the potential of machine learning for their digital products and services. We felt that we were uniquely positioned to guide teams and organizations into this new world, having done so with the last major technology wave and because we continue to keep up with new and emerging technologies. 

    Machine Learning – what you will learn 

    We built the course with three key things in mind: 1) demystify machine learning by defining key terms and explaining how the pieces fit together; 2) enable students to determine if machine learning is an appropriate tool for their problem space; and 3) empower students to understand how to integrate machine learning solutions into their current projects. We wanted to show teams the range of machine learning approaches: some can be implemented relatively easily, others not so much. We know that there are entire advanced degree programs on this topic, so our focus was not on replacing them. We wanted to give students an accessible entry point to the technology and, as we always have, show them what’s possible and give them the confidence to explore more.

    So whether you are a product owner or software engineer, a business leader or a designer, this course was designed to demystify the world of machine learning and help you understand what it really takes to implement a machine learning solution. Here’s what you’ll learn:

    • The basics of machine learning and what’s important in the field. You will learn what the ‘magic’ of machine learning is so that you can converse fluently about it.
    • How to leverage existing machine learning solutions to solve your own product and development problems. Explore platform APIs, frameworks, and pre-built models to solve common machine learning problems. 
    • About data collection and key factors to consider, and gain experience collecting and manually labeling data. 
    • How to get started on a machine learning project. After seeing the fundamental building blocks and learning how to leverage existing systems, get a taste of what it’s like to build your own simple model. 

    If these applications of ML pique your interest, we would love to help you, your team, or your organization level up on machine learning. Reach out to us if you are interested in attending a bootcamp or want to set up a course for your team. Happy coding! 

    Eric Wilson

    Author
    Big Nerd Ranch

    Eric joined Big Nerd Ranch in the spring of 2019 to lead the training initiatives and brings a wealth of experience in both training and consultancy work.




  • Big Nerd Ranch and blended learning: why? 

    Connecting the dots: how we got here 

    At Big Nerd Ranch, we have some strongly held beliefs about how people best learn new technical practices. Some of the central tenets of our teaching philosophy include: 

    • Assume no prior knowledge but infinite intelligence 
    • Students learn best when they have dedicated time and space for learning 
    • Students can learn more, more quickly when we take away distractions and focus them on learning 
    • An expert guide provides the best pathway to new knowledge frontiers 
    • Students need to interact with a topic in multiple ways to most effectively learn it 
    • Adults learn best when they start with specific knowledge and then generalize from it, rather than the other way around 

    These have all been a critical part of our teaching practice for over twenty years and they will continue to be for any training we deliver. In the past, we even held some more ‘extreme’ positions like: 

    • Student resources need to be printed on paper, actual paper 
    • In-person training is always the best way to learn 
    • Training should always be done in a physical in-person setting 

    These last few are a bit tongue-in-cheek. As one must, we have evolved our beliefs as the world of learning has changed. What we’ve always cared about, more than any of the specifics, is that our students learn as much as possible as efficiently as they possibly can. We want them to have an unforgettable experience that they want to tell their friends and colleagues about. That’s how we’ve managed to stay around so long in the rapidly evolving world of technology. 

    After experiencing all of the ups and downs of the last five years, it’s probably less surprising that we would embrace a new way of training students. After all, the pandemic forced us to become a live-virtual training company overnight. Technology evolutions and our own growth have seen us branch out from our mobile core into new technologies and domains such as React, Golang, and Machine Learning. 

    Why try blended? 

    Our blended-learning experiment stems from the same culture of learning and innovation that has propelled us from our earliest days through now as a part of a global experience, design, and technology consultancy (Stellar Elements). 

    So why are we offering our SwiftUI course as a blended experience? 

    The main drivers are our students and clients. We have the opportunity to work with leading companies from all over the world. Those companies are evolving, and so are their attitudes toward learning delivery. Our clients and students have told us pretty clearly that they want to have learning delivery options that provide flexibility and that enable them to learn while continuing to meet their day-to-day responsibilities. 

     

    Also, we believe in creating unforgettable learning experiences, and blended learning allows us to accomplish that goal in a way that is authentic to who we are and what we believe about students and learning. Quite simply, human learning needs a human touch. Learning modalities such as eLearning and video have their place, but to really learn a new skillset or to make a significant leap requires a more immersive and intensive approach with the guidance of an expert mentor. Blended learning combines the best elements of both worlds to create a cohesive and effective learning environment that allows students the flexibility to do some learning at their own pace while also giving them expert access, instruction, and interaction along the way. 

    Lastly, blended learning has a capacity for scale that we are eager to embrace so that we can accommodate more students and can provide more expert access to quality technical training. In turn we can offer the course at a lower price, which makes it more accessible to students and increases its overall value for the price. 

    What can you expect from a BNR blended-learning experience? 

    We have crafted our blended experience based on a lot of research, trial and error, and learnings from our entire history. It’s geared toward giving students the best learning experience we can. 

     

     Here’s what to expect: 

    • Weekly content unlocks
      • Recorded video introducing the week and its lesson plan
      • Recorded lectures for each topic (3-5 per week)
      • Assessments for each topic to build confidence and check progress
      • Hands-on exercises to cement and build on the lessons
    • Live Q&A sessions
      • Live sessions where instructors answer student questions and lead discussions
      • Opportunity to submit questions ahead of time
      • Run twice per week
      • Recorded and made available to students
    • Live mob programming sessions
      • Runs once per week to organically bring together the topics from the week
      • Goals are to:
        • Reinforce topics by seeing them used in other examples
        • Provide another forum for people to ask questions
        • Touch on additional topics that might not be covered in the canonical course content, but are interesting or useful for the project getting built
      • Recorded and made available to students as additional study aids
    • Peer discussion and feedback
      • The instructor facilitates and encourages students to share their challenge solutions with each other for critique and discussion
      • Students have a shared chat space to discuss and share ideas and questions with each other as well as the instructor
      • Students join our Alumni Community for continued discussion and connection after the course

    Come join us! 

    This carefully crafted experience is geared toward helping you to understand what’s possible and giving you the confidence to explore more. I hope we’ve gotten you interested in what blended learning can offer you and/or your team. Come join us on our inaugural journey into blended learning with our upcoming SwiftUI course. It starts on September 20 (less than a month away), so register now!

    Eric Wilson

    Author
    Big Nerd Ranch

    Eric joined Big Nerd Ranch in the spring of 2019 to lead the training initiatives and brings a wealth of experience in both training and consultancy work.


