COMP9414 24T2
Artificial Intelligence
Assignment 1 - Artificial neural networks
Due: Week 5, Wednesday, 26 June 2024, 11:55 PM.
1 Problem context
Time Series Air Quality Prediction with Neural Networks: In this
assignment, you will delve into the realm of time series prediction using neural
network architectures. You will explore both classification and estimation
tasks using a publicly available dataset.
You will be provided with a dataset named “Air Quality” [1], available on the UCI Machine Learning Repository (https://archive.ics.uci.edu/dataset/360/air+quality). We tailored this dataset for this assignment and made some modifications; therefore, please only use the attached dataset for this assignment.
The given dataset contains 8,358 instances of hourly averaged responses from an array of five metal oxide chemical sensors embedded in an air quality chemical multisensor device. The device was located in the field in a significantly polluted area at road level within an Italian city. Data were recorded from March 2004 to February 2005 (one year), representing the longest freely available recordings of on-field deployed air quality chemical sensor device responses. Ground truth hourly averaged concentrations for carbon monoxide, non-methane hydrocarbons, benzene, total nitrogen oxides, and nitrogen dioxide, among other variables, were provided by a co-located reference-certified analyser. The variables included in the dataset are listed in Table 1. Missing values within the dataset are tagged with the value -200.
Table 1: Variables within the dataset.

Variable          Meaning
CO(GT)            True hourly averaged concentration of carbon monoxide
PT08.S1(CO)       Hourly averaged sensor response
NMHC(GT)          True hourly averaged overall Non Metanic HydroCarbons concentration
C6H6(GT)          True hourly averaged Benzene concentration
PT08.S2(NMHC)     Hourly averaged sensor response
NOx(GT)           True hourly averaged NOx concentration
PT08.S3(NOx)      Hourly averaged sensor response
NO2(GT)           True hourly averaged NO2 concentration
PT08.S4(NO2)      Hourly averaged sensor response
PT08.S5(O3)       Hourly averaged sensor response
T                 Temperature
RH                Relative Humidity
AH                Absolute Humidity
2 Activities
This assignment focuses on two main objectives:
• Classification Task: You should develop a neural network that can predict whether the concentration of Carbon Monoxide (CO) exceeds a certain threshold – the mean of the CO(GT) values – based on historical air quality data. This is a binary classification task, where your model learns to classify instances into two categories: above or below the threshold. To determine the threshold, you must first calculate the mean value of CO(GT), excluding unknown data (missing values); a minimal sketch of this computation is given after this list. Then, use this threshold to decide whether the value predicted by your network is above or below it. You are free to choose and design your own network, and there are no limitations on its structure. However, your network should be capable of handling missing values.
• Regression Task: You should develop a neural network that can predict the concentration of Nitrogen Oxides (NOx) based on the other air quality features. This task involves estimating a continuous numerical value (the NOx concentration) from the input features using regression techniques. You are free to choose and design your own network, with no limitations on its structure; however, your model should be able to deal with missing values.

In summary, the classification task aims to divide instances into two categories (exceeding or not exceeding the CO(GT) threshold), while the regression task aims to predict a continuous numerical value (the NOx concentration).
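The sketch below shows one way to compute the CO(GT) threshold while excluding the missing values. It assumes the tailored dataset is available as a CSV file named AirQuality.csv with the column names of Table 1; the file name and read options are assumptions, not part of the hand-out.

    import numpy as np
    import pandas as pd

    # Load the tailored dataset (file name is an assumption).
    df = pd.read_csv("AirQuality.csv")

    # Missing values are tagged with -200; map them to NaN so they are excluded.
    co = df["CO(GT)"].replace(-200, np.nan)

    # Threshold = mean of CO(GT) over the known values (NaNs are skipped by mean()).
    threshold = co.mean()
    print(f"CO(GT) threshold: {threshold:.3f}")

    # Binary target for the rows where CO(GT) is known: 1 = above, 0 = below.
    known = co.notna()
    labels = (co[known] > threshold).astype(int)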
2.1 Data preprocessing
You are expected to analyse the provided data and perform any required preprocessing. Some of the preprocessing tasks might include the ones listed below; however, not all of them are necessary, and you should evaluate each of them against the results obtained. A minimal preprocessing sketch follows the list.
(a) Identify the variation range for the input and output variables.
(b) Plot each variable to observe the overall behaviour of the process.
(c) If outliers or missing data are detected, correct the data accordingly.
(d) Split the data for training and testing.
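A minimal preprocessing sketch covering the four steps above is shown next. The file name, the imputation strategy (mean filling), and the 80/20 split ratio are assumptions; evaluate each choice against your own results.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("AirQuality.csv")
    df = df.replace(-200, np.nan)                    # missing values -> NaN

    # (a) Variation range of the input and output variables.
    print(df.describe())

    # (b) Plot each numeric variable to observe the overall behaviour.
    df.select_dtypes("number").plot(subplots=True, figsize=(10, 20))
    plt.tight_layout()
    plt.show()

    # (c) One possible correction: drop rows with an unknown target and fill the
    #     remaining gaps with column means (other strategies are equally valid).
    df = df.dropna(subset=["NOx(GT)"])
    df = df.fillna(df.mean(numeric_only=True))

    # (d) Split the data for training and testing (shuffle=False keeps the
    #     time order of the series).
    train_df, test_df = train_test_split(df, test_size=0.2, shuffle=False)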
2.2 Design of the neural network
You should select and design neural architectures to address both the classification and the regression problems described above. In each case, consider the following steps (a minimal sketch follows this list):
(a) Design the network and decide the number of layers, units, and their respective activation functions.
(b) Remember that it is recommended that the number of parameters of your network satisfies Nw < (number of samples)/10.
(c) Create the neural network using Keras and TensorFlow.
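The following is a minimal Keras/TensorFlow sketch of the two networks. The layer sizes, activations, and the assumed feature count of 11 (the variables of Table 1 minus the two targets) are illustrative assumptions; design your own architecture.

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_classifier(n_features):
        # Binary classifier: sigmoid output for above/below the CO(GT) threshold.
        return keras.Sequential([
            layers.Input(shape=(n_features,)),
            layers.Dense(16, activation="relu"),
            layers.Dense(8, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])

    def build_regressor(n_features):
        # Regressor: single linear output unit for the NOx(GT) concentration.
        return keras.Sequential([
            layers.Input(shape=(n_features,)),
            layers.Dense(16, activation="relu"),
            layers.Dense(8, activation="relu"),
            layers.Dense(1),
        ])

    # (b) Sanity check: keep the number of parameters below (number of samples)/10.
    n_samples = 8358
    assert build_classifier(11).count_params() < n_samples / 10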
2.3 Training
In this section, you must train your proposed neural networks. Consider the following steps (a minimal training sketch follows this list):
(a) Decide the training parameters such as the loss function, optimizer, batch size, learning rate, and number of epochs.
(b) Train the neural model and monitor the loss values during the process.
(c) Check for possible overfitting problems.
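A minimal training sketch is shown below. It builds the feature and target arrays from the train_df of the preprocessing sketch and the threshold computed earlier; the optimizer, learning rate, batch size, and number of epochs are assumptions to be tuned.

    from tensorflow import keras

    # Feature/target arrays from train_df (preprocessing sketch); `threshold`
    # is the CO(GT) mean over the known values computed earlier.
    feature_cols = [c for c in train_df.select_dtypes("number").columns
                    if c not in ("CO(GT)", "NOx(GT)")]
    X_train = train_df[feature_cols].to_numpy()
    y_train_class = (train_df["CO(GT)"] > threshold).astype(int).to_numpy()
    y_train_nox = train_df["NOx(GT)"].to_numpy()

    # (a) Training parameters (all values here are assumptions).
    clf = build_classifier(X_train.shape[1])
    clf.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                loss="binary_crossentropy", metrics=["accuracy"])

    reg = build_regressor(X_train.shape[1])
    reg.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                loss="mse", metrics=["mae"])

    # (b) Train and keep the loss histories for the plots of Section 2.4.
    history = clf.fit(X_train, y_train_class, validation_split=0.2,
                      epochs=100, batch_size=32, verbose=0)
    history_reg = reg.fit(X_train, y_train_nox, validation_split=0.2,
                          epochs=100, batch_size=32, verbose=0)

    # (c) A widening gap between training and validation loss suggests overfitting.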
2.4 Validating the neural model
Assess your results by plotting the training results and the network response for the test inputs against the test targets. Compute error indexes to complement the visual analysis.
(a) For the classification task, draw two different plots to illustrate your results over the epochs. In the first plot, show the training and validation loss over the epochs. In the second plot, show the training and validation accuracy over the epochs. For example, Figure 1 and Figure 2 show loss and classification accuracy plots for 100 epochs, respectively. A minimal plotting sketch follows the figure captions.

Figure 1: Loss plot for the classification task.
Figure 2: Accuracy plot for the classification task.
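The sketch below produces the two classification plots, assuming `history` is the object returned by model.fit() in the training sketch above.

    import matplotlib.pyplot as plt

    # Loss plot (cf. Figure 1): training vs validation loss over the epochs.
    plt.figure()
    plt.plot(history.history["loss"], label="training loss")
    plt.plot(history.history["val_loss"], label="validation loss")
    plt.xlabel("Epoch"); plt.ylabel("Loss"); plt.legend(); plt.show()

    # Accuracy plot (cf. Figure 2): training vs validation accuracy over the epochs.
    plt.figure()
    plt.plot(history.history["accuracy"], label="training accuracy")
    plt.plot(history.history["val_accuracy"], label="validation accuracy")
    plt.xlabel("Epoch"); plt.ylabel("Accuracy"); plt.legend(); plt.show()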
(b) For the classification task, compute a confusion matrix (https://en.wikipedia.org/wiki/Confusion_matrix) including True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN), as shown in Table 2. Moreover, report accuracy and precision for your test data and mention the number of tested samples, as shown in Table 3 (the numbers shown in both tables are randomly chosen and may not be consistent with each other). For instance, the Sklearn library offers a wide range of metric functions (https://scikit-learn.org/stable/api/sklearn.metrics.html), including the confusion matrix (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html), accuracy, and precision. You can use Sklearn's built-in metric functions to calculate the mentioned metrics or develop your own functions. A minimal metrics sketch follows Table 3.

Table 2: Confusion matrix for the test data for the classification task.

Confusion Matrix        Positive (Actual)   Negative (Actual)
Positive (Predicted)    103                 6
Negative (Predicted)    6                   75

Table 3: Accuracy and precision for the test data for the classification task.

                        Accuracy   Precision   Number of Samples
CO(GT) classification   63%        60%         190
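The sketch below computes the confusion matrix, accuracy, and precision with Sklearn, assuming `clf`, `test_df`, `feature_cols`, and `threshold` come from the earlier sketches.

    from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

    # Test arrays built in the same way as the training arrays.
    X_test = test_df[feature_cols].to_numpy()
    y_test_class = (test_df["CO(GT)"] > threshold).astype(int).to_numpy()

    y_pred = (clf.predict(X_test).ravel() > 0.5).astype(int)

    # confusion_matrix returns [[TN, FP], [FN, TP]] for binary labels {0, 1}.
    tn, fp, fn, tp = confusion_matrix(y_test_class, y_pred).ravel()
    print(f"TP={tp}  TN={tn}  FP={fp}  FN={fn}")
    print(f"Accuracy : {accuracy_score(y_test_class, y_pred):.2%}")
    print(f"Precision: {precision_score(y_test_class, y_pred):.2%}")
    print(f"Number of tested samples: {len(y_test_class)}")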
(c) For the regression task, draw two different plots to illustrate your results. In the first plot, show how the selected loss function varies for both training and validation through the epochs. In the second plot, show the final estimation results for the validation set. For instance, Figure 3 and Figure 4 show the loss function and the network outputs vs the actual NOx(GT) values for a validation set, respectively. In Figure 4 no data preprocessing has been performed; however, as mentioned above, it is expected that you include this in your assignment. A minimal sketch of the estimation plot follows the figure captions.

Figure 3: Loss plot for the regression task.
Figure 4: Estimated and actual NOx(GT) for the validation set.
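A minimal sketch of the regression plots is shown below, assuming `reg`, `history_reg`, `X_test`, and `test_df` come from the earlier sketches.

    import matplotlib.pyplot as plt

    # Loss plot (cf. Figure 3): produced exactly as in the classification sketch,
    # using history_reg instead of history.

    # Estimation plot (cf. Figure 4): estimated vs actual NOx(GT) on the test set.
    y_test_nox = test_df["NOx(GT)"].to_numpy()
    y_hat = reg.predict(X_test).ravel()

    plt.figure()
    plt.plot(y_test_nox, label="actual NOx(GT)")
    plt.plot(y_hat, label="estimated NOx(GT)")
    plt.xlabel("Sample"); plt.ylabel("NOx(GT)"); plt.legend(); plt.show()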
(d) For the regression task, report performance indexes including the Root Mean Squared Error (RMSE), the Mean Absolute Error (MAE) (see a discussion in [2]), and the number of samples for your estimation of NOx(GT) values in a table. The Root Mean Squared Error (RMSE) measures the differences between the observed values and the predicted ones and is defined as follows:

    RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(Y_i - \hat{Y}_i\right)^2},    (1)

where n is the number of samples, Y_i is the actual value, and \hat{Y}_i is the predicted value. In the same way, MAE can be defined as the average of the absolute errors as follows:

    MAE = \frac{1}{n} \sum_{i=1}^{n} \left|Y_i - \hat{Y}_i\right|.    (2)
Table 4 shows an example of the performance indexes (all numbers are randomly chosen and may not be consistent with each other). As mentioned before, the Sklearn library offers a wide range of metric functions, including RMSE (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.root_mean_squared_error.html) and MAE (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html). You can use Sklearn's built-in metric functions to calculate the mentioned metrics or develop your own functions. A minimal sketch follows Table 4.

Table 4: Result table for the test data for the regression task.

RMSE     MAE      Number of Samples
90.60    50.35    55
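The sketch below computes both indexes with Sklearn, assuming `y_test_nox` and `y_hat` come from the regression plotting sketch above.

    from sklearn.metrics import mean_absolute_error, root_mean_squared_error
    # Note: older scikit-learn versions expose RMSE via
    # mean_squared_error(..., squared=False) instead of root_mean_squared_error.

    rmse = root_mean_squared_error(y_test_nox, y_hat)
    mae = mean_absolute_error(y_test_nox, y_hat)
    print(f"RMSE: {rmse:.2f}   MAE: {mae:.2f}   Number of samples: {len(y_test_nox)}")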
3 Testing and discussing your code
As part of the assignment evaluation, your code will be tested by tutors together with you in a discussion session carried out in the week 6 tutorial. The assignment has a total of 25 marks. The discussion is mandatory; therefore, we will not mark any assignment that is not discussed with the tutors.
You are expected to propose and build neural models for the classification and regression tasks. The minimal output we expect to see is the set of results described in Section 2.4. You will receive marks for each of these subsections as shown in Table 5, i.e. 7 marks in total. However, it is fine if you want to include any other outcome to highlight particular aspects when testing and discussing your code with your tutor.
For marking your results, you should be prepared to simulate your neural model with a generalisation set we have kept aside for that purpose. You must anticipate this by including in your submission a script ready to open a file (with the same characteristics as the given dataset but with fewer data points), simulate the network, and perform all the validation tests described in Section 2.4 (b) and (d) (accuracy, precision, RMSE, MAE). It is recommended to save all of your hyper-parameters and weights (your model in general) so you can load your network and perform the analysis later in your discussion session; a minimal save/load sketch is shown below.
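A minimal save/load sketch is shown below; the file names and the .keras format are assumptions (the HDF5 .h5 format can also be used).

    from tensorflow import keras

    # Save the trained models (architecture, hyper-parameters, and weights).
    clf.save("co_classifier.keras")
    reg.save("nox_regressor.keras")

    # Reload them during the discussion session to run the generalisation file.
    clf = keras.models.load_model("co_classifier.keras")
    reg = keras.models.load_model("nox_regressor.keras")

    # Apply exactly the same preprocessing to the generalisation file, then
    # compute accuracy/precision and RMSE/MAE as in Section 2.4 (b) and (d).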
For the classification task, you need to compute accuracy and precision, while for the regression task you need to compute RMSE and MAE, using the generalisation set. You will receive 3 marks for each task, given successful results. The expected results are as follows:

• For the classification task, your network should achieve at least 85% accuracy and precision. Accuracy and precision lower than that will result in 0 marks for that specific section.

• For the regression task, you are expected to achieve an RMSE of at most 280 and an MAE of at most 220 for unseen data points. Errors higher than these values will result in 0 marks.
Finally, you will receive 1 mark for code readability for each task, and
your tutor will also give you a maximum of 5 marks for each task depending
on the level of code understanding as follows: 5. Outstanding, 4. Great,
3. Fair, 2. Low, 1. Deficient, 0. No answer.
Table 5: Marks for each task.

Task                                                                          Marks
Results obtained with the given dataset
  Loss and accuracy plots for the classification task                         2 marks
  Confusion matrix and accuracy/precision tables for the classification task  2 marks
  Loss and estimated NOx(GT) plots for the regression task                    2 marks
  Performance indexes table for the regression task                           1 mark
Results obtained with the generalisation dataset
  Accuracy and precision for the classification task                          3 marks
  RMSE and MAE for the regression task                                        3 marks
Code understanding and discussion
  Code readability for the classification task                                1 mark
  Code readability for the regression task                                    1 mark
  Code understanding and discussion for the classification task               5 marks
  Code understanding and discussion for the regression task                   5 marks
Total marks                                                                   25 marks
4 Submitting your assignment
The assignment must be done individually. You must submit your assignment solution via Moodle. This will consist of a single .ipynb Jupyter file. This file should contain all the necessary code for reading files, data preprocessing, the network architectures, and the result evaluations. Additionally, your file should include short text descriptions to help markers better understand your code. Please be mindful that providing clean and easy-to-read code is part of your assignment.
Please indicate your full name and your zID at the top of the file as a comment. You can submit as many times as you like before the deadline – later submissions overwrite earlier ones. After submitting your file, a good practice is to take a screenshot of it for future reference.

Late submission penalty: UNSW has a standard late submission penalty of 5% of your mark per day, capped at five days from the assessment deadline; after that, students cannot submit the assignment.
5 Deadline and questions
Deadline: Week 5, Wednesday, 26 June 2024, 11:55 PM. Please use the forum on Moodle to ask questions related to the project. We will prioritise questions asked in the forum. However, you should not share your code, to avoid making it public and enabling possible plagiarism. In that case, use the course email cs9414@cse.unsw.edu.au as an alternative.

Although we try to answer questions as quickly as possible, we might take up to 1 or 2 business days to reply; therefore, last-moment questions might not be answered in time.
6 Plagiarism policy
Your program must be entirely your own work. Plagiarism detection software
might be used to compare submissions pairwise (including submissions for
any similar projects from previous years) and serious penalties will be applied,
particularly in the case of repeat offences.
Do not copy from others. Do not allow anyone to see your code.
Please refer to the UNSW Policy on Academic Honesty and Plagiarism if you
require further clarification on this matter.
References
[1] De Vito, S., Massera, E., Piga, M., Martinotto, L. and Di Francia, G.,
2008. On field calibration of an electronic nose for benzene estimation in an
urban pollution monitoring scenario. Sensors and Actuators B: Chemical,
129(2), pp.750-757.
[2] Hodson, T. O. 2022. Root mean square error (RMSE) or mean absolute
error (MAE): When to use them or not. Geoscientific Model Development
Discussions, 2022, 1-10.
