createMLPNetwork - Create and initialize a Multi-Layer Perceptron (MLP) network to be used within a neural state-space system - MATLAB

Create and initialize a Multi-Layer Perceptron (MLP) network to be used within a neural state-space system

Since R2022b


    Syntax

    dlnet = createMLPNetwork(nss,type)

    dlnet = createMLPNetwork(___,Name=Value)

    Description

    dlnet = createMLPNetwork(nss,type) creates a multi-layer perceptron (MLP) network dlnet of type type to approximate the state function, the non-trivial part of the output function, the encoder, or the decoder of the neural state-space object nss. For example, to specify the network for the state function, use

    nss.StateNetwork = createMLPNetwork(nss,"state",...)

    To specify the network for the non-trivial part of the output function, use

    nss.OutputNetwork(2) = createMLPNetwork(nss,"output",...)

    To specify the encoder network configuration, use

    nss.Encoder = createMLPNetwork(nss,"encoder",...)

    To specify the decoder network configuration, use

    nss.Decoder = createMLPNetwork(nss,"decoder",...)
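    For example, the following sketch creates a neural state-space object and assigns custom state and output networks. The sizes used here (four states, two inputs, six outputs, and the layer widths) are illustrative assumptions, not required values.

    % Illustrative configuration: 4 states, 2 inputs, 6 outputs, so the
    % output has a non-trivial part approximated by OutputNetwork(2).
    nss = idNeuralStateSpace(4,NumInputs=2,NumOutputs=6);
    nss.StateNetwork = createMLPNetwork(nss,"state",LayerSizes=[32 32]);
    nss.OutputNetwork(2) = createMLPNetwork(nss,"output",LayerSizes=[16 16]);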


    dlnet = createMLPNetwork(___,Name=Value) specifies name-value arguments after the input arguments in the previous syntax. You can use name-value arguments to set the number of hidden layers, the number of neurons per layer, or the type of activation function.

    For example, dlnet = createMLPNetwork(nss,"output",LayerSizes=[4 3],Activations="sigmoid") creates an output network with two hidden layers having four and three sigmoid-activated neurons, respectively.

    Examples



    Use idNeuralStateSpace to create a continuous-time neural state-space object with three states and one input. By default, the state network has two hidden layers each with 64 neurons and a hyperbolic tangent activation function.

    nss = idNeuralStateSpace(3,NumInputs=1)
    nss =

    Continuous-time Neural ODE in 3 variables
        dx/dt = f(x(t),u(t))
         y(t) = x(t) + e(t)

    f(.) network:
      Deep network with 2 fully connected, hidden layers
      Activation function: tanh

    Variables: x1, x2, x3

    Status:
    Created by direct construction or transformation. Not estimated.

    Use createMLPNetwork and dot notation to reconfigure the state network. Specify three hidden layers with 4, 8, and 4 neurons, respectively, and use sigmoid as the activation function.

    nss.StateNetwork = createMLPNetwork(nss,"state", ...
        LayerSizes=[4 8 4],Activations="sigmoid");

    You can now use time-domain data to perform estimation and validation.
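    As an optional check, you can examine the reconfigured network with the inspection commands noted later on this page:

    % Inspect the new state network.
    summary(nss.StateNetwork)   % layer sizes and learnable-parameter counts
    plot(nss.StateNetwork)      % visualize the layer graph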

    Input Arguments


    nss — Neural state-space object
    idNeuralStateSpace object

    Neural state-space object, specified as an idNeuralStateSpace object.

    Example: idNeuralStateSpace(2,NumInputs=1)

    type — Network type
    "state" | "output" | "encoder" | "decoder"

    Network type, specified as one of the following:

    • "state" — creates a network to approximate the state function of nss. For continuous state-space systems the state function returns the system state derivative with respect to time, while for discrete-time state-space systems it returns the next state. The inputs of the state function are time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive).

    • "output" — creates a network to approximate the non-trivial part of the output function of nss. This network returns the non-trivial system output, y2(t) = H(t,x,u), as a function of time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive). For more information, see idNeuralStateSpace.

    • "encoder" — creates a network to approximate the encoder function. The encoder maps the state to a latent state (usually, of a lower dimension), which is the input to the state function network. For more information, see idNeuralStateSpace.

    • "decoder" — creates a network to approximate the decoder function. The output of the state function network is the input of the decoder. The decoder maps the latent state back to the original state. For more information, see idNeuralStateSpace.

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: LayerSizes=[16 32 16]

    Use name-value pair arguments to specify network properties such as the number of hidden layers, the size of each hidden layer, the activation functions, and the weights and bias initialization methods.

    LayerSizes — Layer sizes
    [64 64] (default) | vector of positive integers

    Layer sizes, specified as a vector of positive integers. Each element specifies the number of neurons (network nodes) in the corresponding fully connected hidden layer. For example, [10 20 8] specifies a network with three hidden layers: the first (after the network input) has 10 neurons, the second has 20 neurons, and the last (before the network output) has 8 neurons. Note that the output layer is also fully connected, and you cannot change its size.
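    For example, a minimal sketch of the [10 20 8] configuration described above:

    % Three fully connected hidden layers with 10, 20, and 8 neurons.
    dlnet = createMLPNetwork(nss,"state",LayerSizes=[10 20 8]);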

    Activations — Activation function type
    "tanh" (default) | "sigmoid" | "relu" | "leakyRelu" | "clippedRelu" | "elu" | "gelu" | "swish" | "softplus" | "scaling" | "softmax" | "none"

    Activation function type for all hidden layers, specified as one of the following: "tanh", "sigmoid", "relu", "leakyRelu", "clippedRelu", "elu", "gelu", "swish", "softplus", "scaling", "softmax", or "none". All of these are available in Deep Learning Toolbox™. "softplus" and "scaling" also require Reinforcement Learning Toolbox™.

    You can specify hyperparameter values for "leakyRelu", "clippedRelu", "elu", and "scaling". For example:

    • "leakyRelu(0.2)" specifies a leaky ReLU activation layer with a scaling value of 0.2.

    • "clippedRelu(5)" specifies a clipped ReLU activation layer with a ceiling value of 5.

    • "elu(2)" specifies an ELU activation layer with the Alpha property equal to 2.

    • "scaling(0.2,4)" specifies a scaling activation layer with a scale of 0.2 and a bias of 4.

    You can also choose not to use an activation function by specifying "none".

    For more information about these activations, see the Activation Layers and Utility Layers sections in List of Deep Learning Layers (Deep Learning Toolbox).
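    For example, a sketch using one of the parameterized activations above (the layer sizes are illustrative):

    % Hidden layers with leaky ReLU activations and scale factor 0.2.
    dlnet = createMLPNetwork(nss,"state", ...
        LayerSizes=[64 64],Activations="leakyRelu(0.2)");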

    WeightsInitializer — Weights initializer method
    "glorot" (default) | "he" | "orthogonal" | "narrow-normal" | "zeros" | "ones"

    Weights initializer method for all hidden layers, specified as one of the following:

    • "glorot" — uses the Glorot method.

    • "he" — uses the He method.

    • "orthogonal" — uses the orthogonal method.

    • "narrow-normal" — uses the narrow-normal method.

    • "zeros" — initializes all weights to zero.

    • "ones" — initializes all weights to one.

    BiasInitializer — Bias initializer method
    "zeros" (default) | "ones" | "narrow-normal"

    Bias initializer method for all hidden layers, specified as one of the following:

    • "narrow-normal" — uses the narrow-normal method.

    • "zeros" — initializes all biases to zero.

    • "ones" — initializes all biases to one.

    Output Arguments

    collapse all

    dlnet — Network for the state, output, encoder, or decoder function
    dlnetwork object

    Network for the state, output, encoder, or decoder function of nss, returned as a dlnetwork (Deep Learning Toolbox) object.

    For continuous state-space systems the state function returns the system state derivative with respect to time, while for discrete-time state-space systems it returns the next state. The inputs of the state function are time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive).

    The output function returns the system output as a function of time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive).

    Note

    You can use commands such as summary(dlnet), plot(dlnet), dlnet.Layers, and dlnet.Learnables to examine network details.
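    For example:

    % Illustrative inspection of a newly created network.
    dlnet = createMLPNetwork(nss,"state");
    dlnet.Layers       % layer array
    dlnet.Learnables   % table of learnable parameters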

    Version History

    Introduced in R2022b

    See Also

    Objects

    • idNeuralStateSpace | nssTrainingADAM | nssTrainingSGDM | nssTrainingRMSProp | nssTrainingLBFGS | idss | idnlgrey

    Functions

    • setNetwork | nssTrainingOptions | nlssest | generateMATLABFunction | idNeuralStateSpace/evaluate | idNeuralStateSpace/linearize | sim

    Live Editor Tasks

    • Estimate Neural State-Space Model

    Blocks

    • Neural State-Space Model

    Topics

    • What are Neural State-Space Models?
    • Estimate Neural State-Space System
    • Estimate Nonlinear Autonomous Neural State-Space System
    • Neural State-Space Model of Simple Pendulum System
    • Reduced Order Modeling of a Nonlinear Dynamical System using Neural State-Space Model with Autoencoder
