Lesson 2

Welcome to another lesson! This time we continue our exciting journey into building flexible neural networks with **TensorFlow**. In this lesson, we'll create a model-building function, an essential tool for customizable models. Our primary goal is to help you understand and implement a function that constructs neural network models according to your specifications. By the end of this lesson, you will be able to create versatile, scalable models, enabling you to rapidly prototype and experiment with different neural network architectures.

Experimenting with building neural network models in **TensorFlow** is an excellent way to learn and gain experience. However, if you find yourself constructing multiple models with different parameters and architectures, it can become repetitive and cluttered. You might be copying and pasting boilerplate code or constantly modifying parameters manually. In these scenarios, it can be very useful to create a function that takes your specifications as inputs and returns an appropriate neural network model.

In Python, and specifically in **TensorFlow**, creating such a function is straightforward. This approach leverages abstraction—a fundamental idea in computer science that allows us to hide complexity and make things reusable.

A model-building function not only streamlines the code but also enhances readability. Moreover, it provides a flexible and scalable way to generate different models based on varying requirements. Such a function can be particularly useful in scenarios where you need to:

- Rapidly prototype different neural network architectures.
- Maintain cleaner and more maintainable code with less repetition.
- Easily experiment with various configurations and parameters to find the best performing model.

By encapsulating the model creation process within a function, you make your workflow more efficient and less error-prone. Let's dive deeper and see how this works.

We're going to write a Python function, `create_model`, which builds a neural network model according to our requirements. The function will allow us to specify the shape of the input, the number of neurons in the hidden layer, and the activation function for the output layer.

Let's break down the code:

```python
import tensorflow as tf

def create_model(input_shape=(2,), num_neurons=10, output_activation='sigmoid'):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Dense(num_neurons, activation='relu'),
        tf.keras.layers.Dense(1, activation=output_activation)
    ])
    return model
```

Above, our `create_model` function is defined with three parameters:

- `input_shape`: Specifies the shape of the input data. It's a tuple, defaulting to `(2,)`, which signifies two input features.
- `num_neurons`: The number of neurons in the dense hidden layer. It defaults to `10`.
- `output_activation`: The activation function used in the output layer. By default, it is `'sigmoid'`.

We then initialize a `tf.keras.Sequential` model with an input layer and two dense layers. The first dense layer, with a `relu` activation function, makes up our hidden layer, while the second dense layer serves as our output layer. The activation function of the output layer is specified by `output_activation`.

Finally, we return our created model.

This function lets us create customizable neural network models with ease. We can freely change the input shape, the number of neurons in the hidden layer, and the activation function for the output layer.
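As a quick illustration, here is a hedged sketch of how the same function could serve other hypothetical tasks; the parameter values below are illustrative and not part of this lesson's setup (the builder is repeated so the snippet runs on its own):

```python
import tensorflow as tf

def create_model(input_shape=(2,), num_neurons=10, output_activation='sigmoid'):
    # Same builder as in this lesson, repeated so the snippet is self-contained.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Dense(num_neurons, activation='relu'),
        tf.keras.layers.Dense(1, activation=output_activation)
    ])
    return model

# A binary classifier over 4 input features (hypothetical configuration).
classifier = create_model(input_shape=(4,), num_neurons=16, output_activation='sigmoid')

# A single-output regression model: a 'linear' activation leaves the output unbounded.
regressor = create_model(input_shape=(4,), num_neurons=16, output_activation='linear')
```

Only the output activation differs between the two: `sigmoid` squashes the output into (0, 1) for binary classification, while `linear` suits regression targets.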

Now that we have our function, let's see it in action:

```python
model_1 = create_model(input_shape=(2,), num_neurons=10, output_activation='sigmoid')
model_2 = create_model(input_shape=(2,), num_neurons=20, output_activation='sigmoid')

print("Model 1 Architecture:")
model_1.summary()

print("\nModel 2 Architecture:")
model_2.summary()
```

In the above example, we create two different models, `model_1` and `model_2`. Both models have an input shape of `(2,)`, but `model_1` has `10` neurons in its hidden layer, while `model_2` has `20`. Both models use the `sigmoid` activation function in their output layers.

Our function quickly constructed two different neural network models with different architectures. We can easily compare these models or use them for different tasks, all while keeping our code neat and manageable.

Let's examine the models' architectures displayed by the above code:

```
Model 1 Architecture:
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 10)             │            30 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 1)              │            11 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 41 (164.00 B)
 Trainable params: 41 (164.00 B)
 Non-trainable params: 0 (0.00 B)

Model 2 Architecture:
Model: "sequential_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense_2 (Dense)                 │ (None, 20)             │            60 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_3 (Dense)                 │ (None, 1)              │            21 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 81 (324.00 B)
 Trainable params: 81 (324.00 B)
 Non-trainable params: 0 (0.00 B)
```

The output summaries display the architectures of the two models we built, showing the type of each layer, the output shape after each layer, and the number of trainable parameters. This summary view is very useful for understanding a model's structure and parameters at a glance.
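The parameter counts in the summaries are easy to verify by hand: a `Dense` layer with `n` inputs and `m` units holds `n * m` weights plus `m` biases. A small sketch of that arithmetic (plain Python, independent of TensorFlow):

```python
def dense_params(n_inputs, n_units):
    # A Dense layer stores one weight per input-unit pair, plus one bias per unit.
    return n_inputs * n_units + n_units

# Model 1: 2 inputs -> 10 hidden neurons -> 1 output
model_1_total = dense_params(2, 10) + dense_params(10, 1)
print(model_1_total)  # 41, matching the summary above

# Model 2: 2 inputs -> 20 hidden neurons -> 1 output
model_2_total = dense_params(2, 20) + dense_params(20, 1)
print(model_2_total)  # 81
```

Doubling the hidden layer from 10 to 20 neurons roughly doubles the total parameter count, which the two summaries confirm (41 vs. 81).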

Congratulations! You've learned to create a function that builds **TensorFlow** models more efficiently. We've leveraged Python's power to create a reusable, flexible function that can significantly streamline neural network model creation.

In our upcoming exercises, try using this model-building function to generate various models with different parameters. Experiment with altering the input shapes, the neurons in the hidden layer, and the activation function for the output layer. Through practice, you'll deepen your understanding and become proficient in creating versatile **TensorFlow** models.
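For instance, a simple loop like the following makes it easy to sweep over several hidden-layer sizes; the neuron counts chosen here are arbitrary, just for illustration (the builder is repeated so the snippet runs on its own):

```python
import tensorflow as tf

def create_model(input_shape=(2,), num_neurons=10, output_activation='sigmoid'):
    # The builder from this lesson, repeated so this snippet runs on its own.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Dense(num_neurons, activation='relu'),
        tf.keras.layers.Dense(1, activation=output_activation)
    ])
    return model

# Build one model per candidate hidden-layer size and compare their sizes.
for num_neurons in [5, 10, 20]:
    model = create_model(num_neurons=num_neurons)
    print(f"{num_neurons} hidden neurons -> {model.count_params()} parameters")
```

The same pattern extends naturally to sweeping input shapes or output activations.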

Remember, understanding **TensorFlow** and its capabilities is fundamental for any modern Machine Learning Engineer. So embrace the challenge, dive into the code, and happy modeling!