Neural networks
Eric Postma, IKAT, Universiteit Maastricht
TRANSCRIPT
[Page 1]
Neural networks
Eric Postma
IKAT, Universiteit Maastricht
[Page 2]
Overview
Introduction: the biology of neural networks
• the biological computer
• brain-inspired models
• basic notions
Interactive neural-network demonstrations
• Perceptron
• Multilayer perceptron
• Kohonen’s self-organising feature map
• Examples of applications
[Page 4]
Two types of learning
• Supervised learning: curve fitting, surface fitting, ...
• Unsupervised learning: clustering, visualisation, ...
[Page 7]
(Artificial) neural networks
The digital computer versus the neural computer
[Page 10]
Digital versus biological computers
5 distinguishing properties:
• speed
• robustness
• flexibility
• adaptivity
• context-sensitivity
[Page 11]
Speed: the “hundred time steps” argument
“The critical resource that is most obvious is time. Neurons whose basic computational speed is a few milliseconds must be made to account for complex behaviors which are carried out in a few hundred milliseconds (Posner, 1978). This means that entire complex behaviors are carried out in less than a hundred time steps.”
Feldman and Ballard (1982)
[Page 15]
Adaptivity
Processing implies learning in biological computers,
versus
processing does not imply learning in digital computers.
[Page 16]
Context-sensitivity: patterns
emergent properties
[Page 17]
Robustness and context-sensitivity
coping with noise
[Page 18]
The neural computer
• Is it possible to develop a model after the natural example?
• Brain-inspired models: models based on a restricted set of structural and functional properties of the (human) brain
[Page 20]
Neurons, the building blocks of the brain
[Page 22]
Synapses, the basis of learning and memory
[Page 23]
Learning: Hebb’s rule
[Figure: neuron 1, synapse, neuron 2]
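Hebb’s rule (strengthen a synapse in proportion to the joint activity of the two neurons it connects) can be sketched as follows; the function name, learning rate, and activity values are illustrative, not from the slides:

```python
def hebb_update(w, pre, post, eta=0.1):
    """Hebb's rule: increase the synaptic weight in proportion
    to the product of pre- and postsynaptic activity."""
    return w + eta * pre * post

w = 0.0
for _ in range(10):   # both neurons repeatedly active together
    w = hebb_update(w, pre=1.0, post=1.0)
# the weight has grown; with either neuron silent it would not change
```

Note that plain Hebbian growth is unbounded; practical variants add decay or normalisation.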
[Page 24]
Connectivity
An example: the visual system is a feedforward hierarchy of neural modules.
Every module is (to a certain extent) responsible for a certain function.
[Page 25]
(Artificial) neural networks
• Neurons
  – activity
  – nonlinear input-output function
• Connections
  – weight
• Learning
  – supervised
  – unsupervised
[Page 26]
Artificial neurons
• input (vectors)
• summation (excitation)
• output (activation)
[Figure: inputs i1, i2, i3 are summed into the excitation e; the activation is a = f(e)]
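A minimal sketch of such a unit: weighted inputs are summed into an excitation e, and a sigmoid maps e to the activation. The weights and inputs here are made-up numbers:

```python
import math

def neuron(inputs, weights, a=1.0):
    """Artificial neuron: summation (excitation e) followed by a
    sigmoid output function (activation), i.e. the slide's a = f(e)."""
    e = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-e / a))

act = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1])
# a small positive excitation gives an activation slightly above 0.5
```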
[Page 27]
Input-output function
• nonlinear function: f(x) = 1 / (1 + e^(-x/a))
[Figure: sigmoid curve of f(e) against e, rising from 0 to 1; the parameter a sets the steepness]
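The shape of the function can be checked numerically; this sketch uses the steepness parameter a from the formula:

```python
import math

def f(x, a=1.0):
    # sigmoid with steepness parameter a (smaller a -> steeper curve)
    return 1.0 / (1.0 + math.exp(-x / a))

mid = f(0.0)            # exactly 0.5 at the midpoint
hi, lo = f(4.0), f(-4.0)  # approaches 1 and 0 at the extremes
# a steeper curve (small a) saturates faster than a shallow one
steep, shallow = f(1.0, a=0.5), f(1.0, a=2.0)
```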
[Page 28]
Artificial connections (synapses)
• wAB
  – the weight of the connection from neuron A to neuron B
[Figure: A connected to B with weight wAB]
[Page 30]
Learning in the Perceptron
• Delta learning rule
  – based on the difference between the desired output t and the actual output o, given input x
• Global error E
  – a function of the differences between the desired and actual outputs
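A sketch of the delta rule for a linear unit; the learning rate and the training pair are illustrative. Repeated updates shrink the difference between t and o:

```python
def delta_update(w, x, t, eta=0.5):
    """Delta rule sketch: w_i += eta * (t - o) * x_i for a linear unit."""
    o = sum(wi * xi for wi, xi in zip(w, x))   # actual output
    return [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
x, t = [1.0, 0.5], 1.0          # one training pattern: input + desired output
for _ in range(20):
    w = delta_update(w, x, t)
o = sum(wi * xi for wi, xi in zip(w, x))   # now very close to t
```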
[Page 33]
The history of the Perceptron
• Rosenblatt (1958)
• Minsky & Papert (1969)
• Rumelhart & McClelland (1986)
[Page 34]
The multilayer perceptron
[Figure: input, hidden, and output layers]
[Page 35]
Training the MLP
• supervised learning
  – each training pattern: input + desired output
  – in each epoch: present all patterns
  – at each presentation: adapt the weights
  – after many epochs: convergence to a local minimum
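The epoch loop can be sketched with a tiny 2-2-1 MLP trained by backpropagation; the architecture, learning rate, and toy AND task are illustrative choices, not from the slides:

```python
import math, random

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# 2 inputs -> 2 hidden -> 1 output, small random initial weights
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j]) for j in range(2)]
    return h, sig(sum(w * hj for w, hj in zip(W2, h)) + b2)

def epoch(eta=0.5):
    """One epoch: present all patterns, adapt weights at each presentation."""
    global b2
    err = 0.0
    for x, t in data:
        h, o = forward(x)
        err += (t - o) ** 2
        d_o = (t - o) * o * (1 - o)            # output delta
        for j in range(2):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])  # hidden delta
            W2[j] += eta * d_o * h[j]
            b1[j] += eta * d_h
            for i in range(2):
                W1[j][i] += eta * d_h * x[i]
        b2 += eta * d_o
    return err

before = epoch()
for _ in range(2000):           # many epochs -> error shrinks
    after = epoch()
```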
[Page 36]
Phoneme recognition with an MLP
Input: frequencies
Output: pronunciation
[Page 38]
Compression with an MLP: the autoencoder
[Page 41]
Preventing overfitting
GENERALISATION = performance on the test set
• Early stopping
• Training, test, and validation sets
• k-fold cross-validation
  – leave-one-out procedure
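k-fold cross-validation can be sketched as an index-splitting procedure (the function name is illustrative); setting k equal to the number of items gives the leave-one-out procedure:

```python
def k_fold_splits(n_items, k):
    """Partition indices 0..n-1 into k folds; each fold serves once
    as the test set while the rest form the training set."""
    folds = [list(range(i, n_items, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_splits(10, 5))
# every item appears in exactly one test fold, and train + test cover all items
```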
[Page 45]
Other applications
• Practical
  – OCR
  – financial time series
  – fraud detection
  – process control
  – marketing
  – speech recognition
• Theoretical
  – cognitive modelling
  – biological modelling
[Page 48]
Derivation of the delta learning rule
Target output: t
Actual output: o = f(h), with excitation h = Σ_j w_j x_j
Gradient descent on E = ½ (t - o)² gives Δw_j = η (t - o) f′(h) x_j
[Page 50]
Sigmoid function
• May also be the tanh function (range (-1, +1) instead of (0, 1))
• Derivative: f′(x) = f(x) [1 - f(x)]
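The derivative identity can be verified against a central finite difference:

```python
import math

f = lambda x: 1.0 / (1.0 + math.exp(-x))

# check f'(x) = f(x) * (1 - f(x)) at an arbitrary point
x, h = 0.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # finite-difference estimate
analytic = f(x) * (1 - f(x))                # the slide's closed form
```

This identity is what makes the sigmoid convenient in backpropagation: the derivative is computed from the already-available activation.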
[Page 51]
Derivation of the generalized delta rule
[Page 53]
Adaptation of the hidden-output weights
[Page 54]
Adaptation of the input-hidden weights
[Page 55]
Forward and backward propagation
[Page 56]
Decision boundaries of Perceptrons
Straight lines (hyperplanes); only linearly separable problems
[Page 57]
Decision boundaries of MLPs
Convex areas (open or closed)
[Page 58]
Decision boundaries of MLPs
Combinations of convex areas
[Page 59]
Learning and representing similarity
[Page 60]
Alternative conception of neurons
• Neurons do not take the weighted sum of their inputs (as in the perceptron), but measure the similarity of the weight vector to the input vector
• The activation of the neuron is a measure of similarity: the more similar the weight vector is to the input, the higher the activation
• Neurons represent “prototypes”
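One common way to realise such a prototype unit is cosine similarity between the weight vector and the input vector; the measure and the example vectors are illustrative choices, not necessarily those in the slides:

```python
import math

def similarity_activation(w, x):
    """Prototype-style unit: activation = cosine similarity
    between the weight vector and the input vector."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.sqrt(sum(wi * wi for wi in w)) * math.sqrt(sum(xi * xi for xi in x))
    return dot / norm

proto = [1.0, 0.0]                      # the "prototype" the unit represents
close = similarity_activation(proto, [0.9, 0.1])   # input resembling the prototype
far = similarity_activation(proto, [0.1, 0.9])     # dissimilar input
# the more similar the input is to the weights, the higher the activation
```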
[Page 61]
Coarse coding
[Page 62]
2nd-order isomorphism
[Page 63]
Prototypes for preprocessing
[Page 64]
Kohonen’s SOFM (Self-Organising Feature Map)
• Unsupervised learning
• Competitive learning
[Figure: an n-dimensional input connected to an output map; the winner is highlighted]
[Page 65]
Competitive learning
• Determine the winner (the neuron whose weight vector has the smallest distance to the input vector)
• Move the weight vector w of the winning neuron towards the input i
[Figure: before learning, w lies far from i; after learning, w has moved towards i]
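The two steps can be sketched directly; the learning rate and the vectors are illustrative:

```python
def winner(weights, x):
    # index of the weight vector with the smallest (squared Euclidean)
    # distance to the input vector
    d = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    return d.index(min(d))

def update(w, x, eta=0.5):
    # move the winning weight vector towards the input
    return [wi + eta * (xi - wi) for wi, xi in zip(w, x)]

weights = [[0.0, 0.0], [1.0, 1.0]]
x = [0.9, 0.8]
j = winner(weights, x)              # neuron 1 wins: [1, 1] is closest
weights[j] = update(weights[j], x)  # its weights move towards x
```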
[Page 66]
Kohonen’s idea
• Impose a topological order onto the competitive neurons (e.g., a rectangular map)
• Let neighbours of the winner share the “prize” (the “postcode lottery” principle)
• After learning, neurons with similar weights tend to cluster on the map
[Page 67]
Topological order: neighbourhoods
• Square
  – winner (red)
  – nearest neighbours
• Hexagonal
  – winner (red)
  – nearest neighbours
[Page 68]
A simple example
• A topological map of 2 x 3 neurons and two inputs
[Figure: 2D input, the weights, and their visualisation on the map]
[Page 69]
Weights before training
[Page 70]
Input patterns (note the 2D distribution)
[Page 71]
Weights after training
[Page 72]
Another example
• Input: uniformly randomly distributed points
• Output: map of 20 x 20 neurons
• Training: starting with a large learning rate and neighbourhood size, both are gradually decreased to facilitate convergence
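This training schedule can be sketched on a smaller map (5 x 5 rather than 20 x 20); the decay schedules and the Gaussian neighbourhood function are illustrative choices:

```python
import math, random

random.seed(1)
size = 5   # a small 5 x 5 map for the sketch
w = [[[random.random(), random.random()] for _ in range(size)]
     for _ in range(size)]

def train(steps=2000, eta0=0.5, sigma0=2.0):
    for t in range(steps):
        frac = t / steps
        eta = eta0 * (1 - frac)             # learning rate decreases
        sigma = sigma0 * (1 - frac) + 0.5   # neighbourhood size shrinks
        x = [random.random(), random.random()]   # uniformly random input
        # winner: grid position whose weight vector is closest to x
        bi, bj = min(((i, j) for i in range(size) for j in range(size)),
                     key=lambda p: (w[p[0]][p[1]][0] - x[0]) ** 2
                                 + (w[p[0]][p[1]][1] - x[1]) ** 2)
        # winner and neighbours move towards x, weighted by map distance
        for i in range(size):
            for j in range(size):
                g = math.exp(-((i - bi) ** 2 + (j - bj) ** 2)
                             / (2 * sigma * sigma))
                w[i][j][0] += eta * g * (x[0] - w[i][j][0])
                w[i][j][1] += eta * g * (x[1] - w[i][j][1])

train()
# the weights stay inside the unit square the inputs were drawn from
```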
[Page 73]
[Page 74]
Dimension reduction
[Page 75]
Adaptive resolution
[Page 76]
Application of SOFM
[Figure: examples (input) and the SOFM after training (output)]
[Page 77]
Visual features (biologically plausible)
[Page 78]
Relation with statistical methods 1
• Principal Components Analysis (PCA)
[Figure: data points projected onto the first two principal components, pca1 and pca2]
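PCA can be sketched on a toy 2-D data set via the eigenvalues of the covariance matrix; here the data lie near a line, so the first component captures almost all variance. The data set and its parameters are made up for illustration:

```python
import math, random

random.seed(0)
# correlated 2-D points: y is approximately x plus a little noise
xs = [random.gauss(0, 1) for _ in range(500)]
data = [(x, x + random.gauss(0, 0.1)) for x in xs]

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
cxx = sum((p[0] - mx) ** 2 for p in data) / n
cyy = sum((p[1] - my) ** 2 for p in data) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n

# eigenvalues of the 2x2 covariance matrix [[cxx, cxy], [cxy, cyy]]
tr, det = cxx + cyy, cxx * cyy - cxy * cxy
root = math.sqrt(tr * tr / 4 - det)
l1, l2 = tr / 2 + root, tr / 2 - root   # l1: variance along pca1
# l1 is far larger than l2: projecting onto pca1 loses little information
```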
[Page 79]
Relation with statistical methods 2
• Multi-Dimensional Scaling (MDS)
• Sammon mapping
[Figure: distances in high-dimensional space]
[Page 81]
Fractal dimension in art
Jackson Pollock (“Jack the Dripper”)
[Page 82]
Taylor, Micolich, and Jonas (1999). Fractal analysis of Pollock’s drip paintings. Nature, 399, 422 (3 June).
[Figure: fractal dimension plotted against creation date; a bracket marks the range for natural images]
[Page 83]
Our Van Gogh research
Two painters:
• Vincent van Gogh paints Van Gogh
• Claude-Emile Schuffenecker paints Van Gogh
[Page 84]
Sunflowers
• Is it made by
  – Van Gogh?
  – Schuffenecker?
[Page 85]
Approach
• Select appropriate features (skipped here, but very important!)
• Apply neural networks
[Page 87]
Training data
Van Gogh (5000 textures)
Schuffenecker (5000 textures)
[Page 88]
Results
• Generalisation performance
• 96% correct classification on untrained data
[Page 89]
Results, cont.
• Trained art-expert network applied to the Yasuda sunflowers
• 89% of the textures are classified as a genuine Van Gogh
[Page 90]
A major caveat…
• Not only the painters are different…
• …but also the material,
and maybe many other things…