COMP9414 24T2
Artificial Intelligence
Assignment 1 - Artificial neural networks
Due: Week 5, Wednesday, 26 June 2024, 11:55 PM.
1 Problem context
Time Series Air Quality Prediction with Neural Networks: In this
assignment, you will delve into the realm of time series prediction using neural
network architectures. You will explore both classification and estimation
tasks using a publicly available dataset.
You will be provided with a dataset named "Air Quality" [1], available
on the UCI Machine Learning Repository (https://archive.ics.uci.edu/dataset/360/air+quality).
We have tailored this dataset for this assignment and made some modifications;
therefore, please only use the attached dataset for this assignment.
The given dataset contains 8,358 instances of hourly averaged responses
from an array of five metal oxide chemical sensors embedded in an air quality
chemical multisensor device. The device was located in the field in a
significantly polluted area at road level within an Italian city. Data were
recorded from March 2004 to February 2005 (one year), representing the
longest freely available recordings of on-field deployed air quality chemical
sensor device responses. Ground truth hourly averaged concentrations for
carbon monoxide, non-methane hydrocarbons, benzene, total nitrogen oxides,
and nitrogen dioxide, among other variables, were provided by a co-located
reference-certified analyser. The variables included in the dataset are listed
in Table 1. Missing values within the dataset are tagged with the value -200.
Table 1: Variables within the dataset.

Variable         Meaning
CO(GT)           True hourly averaged concentration of carbon monoxide
PT08.S1(CO)      Hourly averaged sensor response
NMHC(GT)         True hourly averaged overall Non-Methanic Hydrocarbons concentration
C6H6(GT)         True hourly averaged Benzene concentration
PT08.S2(NMHC)    Hourly averaged sensor response
NOx(GT)          True hourly averaged NOx concentration
PT08.S3(NOx)     Hourly averaged sensor response
NO2(GT)          True hourly averaged NO2 concentration
PT08.S4(NO2)     Hourly averaged sensor response
PT08.S5(O3)      Hourly averaged sensor response
T                Temperature
RH               Relative Humidity
AH               Absolute Humidity
2 Activities
This assignment focuses on two main objectives:
• Classification Task: You should develop a neural network that can
predict whether the concentration of Carbon Monoxide (CO) exceeds
a certain threshold – the mean of the CO(GT) values – based on historical
air quality data. This task involves binary classification, where your
model learns to classify instances into two categories: above or below
the threshold. To determine the threshold, you must first calculate
the mean value of CO(GT), excluding unknown data (missing values);
a minimal sketch of this calculation is given after this list. Then, use
this threshold to decide whether the value predicted by your network
is above or below it. You are free to choose and design your own
network, and there are no limitations on its structure. However,
your network should be capable of handling missing values.
• Regression Task: You should develop a neural network that can predict
the concentration of Nitrogen Oxides (NOx) based on other air quality
features. This task involves estimating a continuous numerical value
(the NOx concentration) from the input features using regression
techniques. You are free to choose and design your own network, with
no limitations on its structure; however, your model should be able to
deal with missing values.
In summary, the classification task aims to divide instances into two
categories (exceeding or not exceeding the CO(GT) threshold), while the
regression task aims to predict a continuous numerical value (the NOx
concentration).
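The classification threshold can be computed directly from the data before any training. Below is a minimal sketch, assuming the tailored dataset is available as a CSV file (the file name air_quality.csv is an assumption; use the attached file) and that, as described in Section 1, missing values are tagged with -200:

```python
import numpy as np
import pandas as pd

# Load the tailored dataset (the file name is an assumption; use the attached file).
df = pd.read_csv("air_quality.csv")

# Replace the -200 tags with NaN so they do not distort the mean.
co = df["CO(GT)"].replace(-200, np.nan)

# The classification threshold is the mean of the known CO(GT) values
# (pandas' mean() skips NaN entries by default).
threshold = co.mean()
print(f"CO(GT) threshold: {threshold:.3f}")

# Binary targets: 1 if the concentration exceeds the threshold, 0 otherwise.
# Rows where CO(GT) itself is missing still need handling during preprocessing.
labels = (co > threshold).astype(int)
```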
2.1 Data preprocessing
It is expected that you analyse the provided data and perform any required
preprocessing. Some of the tasks during preprocessing might include the ones
shown below; however, not all of them are necessary, and you should evaluate
each of them against the results obtained. A minimal preprocessing sketch
follows the list below.
(a) Identify variation range for input and output variables.
(b) Plot each variable to observe the overall behaviour of the process.
(c) In case outliers or missing data are detected, correct the data accordingly.
(d) Split the data for training and testing.
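The following is a minimal preprocessing sketch covering steps (a)-(d); the file name, the median imputation of missing values, and the 80/20 split are illustrative choices rather than requirements:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Load the dataset and turn the -200 tags into NaN (file name is an assumption).
df = pd.read_csv("air_quality.csv").replace(-200, np.nan)

# (a) Identify variation ranges of input and output variables.
print(df.describe())

# (b) Plot each numeric variable to observe the overall behaviour of the process.
df.select_dtypes("number").plot(subplots=True, figsize=(10, 18))
plt.show()

# (c) Correct missing data; median filling is one simple option.
df = df.fillna(df.median(numeric_only=True))

# (d) Split the data for training and testing (80/20 split chosen for illustration).
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
```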
2.2 Design of the neural network
You should select and design neural architectures for addressing both the
classification and regression problems described above; a minimal Keras
sketch follows the list below. In each case, consider the following steps:
(a) Design the network and decide the number of layers, units, and their
respective activation functions.
(b) Remember it is recommended that your network keep its total number of
parameters within Nw < (number of samples)/10.
(c) Create the neural network using Keras and TensorFlow.
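A minimal sketch of how such networks could be created with Keras and TensorFlow is shown below; the layer sizes, activations, and input dimension are illustrative assumptions, not a prescribed architecture:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 12  # assumption: number of input columns after preprocessing

# Classification network: sigmoid output for the above/below-threshold decision.
clf_model = keras.Sequential([
    keras.Input(shape=(N_FEATURES,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Regression network: linear output for the NOx(GT) estimate.
reg_model = keras.Sequential([
    keras.Input(shape=(N_FEATURES,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])

# Check the parameter-count recommendation Nw < (number of samples) / 10.
clf_model.summary()
reg_model.summary()
```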
2.3 Training
In this section, you have to train your proposed neural network. Consider
the following steps:
(a) Decide the training parameters such as loss function, optimizer, batch
size, learning rate, and number of epochs.
(b) Train the neural model and verify the loss values during the process
(a minimal sketch follows this list).
(c) Check for possible overfitting problems.
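Following on from the sketch above, a minimal training setup could look as follows; the loss, optimizer, batch size, learning rate and number of epochs are illustrative choices, and X_train/y_train stand in for the arrays produced by your own preprocessing. The regression model is trained analogously with a regression loss such as mean squared error.

```python
import matplotlib.pyplot as plt
from tensorflow import keras

# (a) Training parameters (illustrative choices, not requirements).
clf_model.compile(
    loss="binary_crossentropy",                            # classification loss
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    metrics=["accuracy"],
)

# (b) Train and keep the loss/accuracy history (X_train/y_train are assumed
#     to come from the preprocessing step in Section 2.1).
history = clf_model.fit(
    X_train, y_train,
    validation_split=0.2,
    epochs=100,
    batch_size=32,
    verbose=0,
)

# (c) A widening gap between training and validation loss suggests overfitting.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```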
2.4 Validating the neural model
Assess your results by plotting the training results and the network response
for the test inputs against the test targets. Compute error indexes to
complement the visual analysis.
(a) For the classification task, draw two different plots to illustrate your
results over different epochs. In the first plot, show the training and
validation loss over the epochs. In the second plot, show the training
and validation accuracy over the epochs. For example, Figure 1 and
Figure 2 show loss and classification accuracy plots for 100 epochs,
respectively.
Figure 1: Loss plot for the classification task.
Figure 2: Accuracy plot for the classification task.
(b) For the classification task, compute a confusion matrix
(https://en.wikipedia.org/wiki/Confusion_matrix) including True Positive (TP),
True Negative (TN), False Positive (FP), and False Negative (FN) counts, as
shown in Table 2. Moreover, report accuracy and precision for your test data
and mention the number of tested samples, as shown in Table 3 (the numbers
shown in both tables are randomly chosen and may not be consistent with each
other). For instance, the Sklearn library offers a wide range of metric functions
(https://scikit-learn.org/stable/api/sklearn.metrics.html), including
confusion_matrix (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html),
accuracy, and precision. You can use the Sklearn built-in metric functions to
calculate the mentioned metrics or develop your own functions; a minimal
sketch is given at the end of this section.
Table 2: Confusion matrix for the test data for the classification task.

Confusion Matrix       Positive (Actual)   Negative (Actual)
Positive (Predicted)   103                 6
Negative (Predicted)   6                   75
Table 3: Accuracy and precision for the test data for the classification task.

                        Accuracy   Precision   Number of Samples
CO(GT) classification   63%        60%         190
(c) For the regression task, draw two different plots to illustrate your results.
In the first plot, show how the selected loss function varies for both training
and validation through the epochs. In the second plot, show the final estimation
results for the validation set. For instance, Figure 3 and Figure 4 show the
loss function and the network outputs vs the actual NOx(GT) values for a
validation set, respectively. In Figure 4, no data preprocessing has been
performed; however, as mentioned above, it is expected that you include this
in your assignment.
Figure 3: Loss plot for the regression task.
Figure 4: Estimated and actual NOx(GT) for the validation set.

(d) For the regression task, report performance indexes including the Root
Mean Squared Error (RMSE), the Mean Absolute Error (MAE) (see a
discussion in [2]), and the number of samples for your estimation of
NOx(GT) values in a table. The Root Mean Squared Error (RMSE) measures
the differences between the observed values and the predicted ones and is
defined as follows:

RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2},    (1)

where n is the number of samples, Y_i is the actual label and \hat{Y}_i is
the predicted value. In the same way, the MAE can be defined as the absolute
average of the errors as follows:

MAE = \frac{1}{n} \sum_{i=1}^{n} |Y_i - \hat{Y}_i|.    (2)
Table 4 shows an example of the performance indexes (all numbers are randomly
chosen and may not be consistent with each other). As mentioned before, the
Sklearn library offers a wide range of metric functions, including
root_mean_squared_error (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.root_mean_squared_error.html)
and mean_absolute_error (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html).
You can use the Sklearn built-in metric functions to calculate the mentioned
metrics or develop your own functions (see the sketch after Table 4).

Table 4: Result table for the test data for the regression task.

RMSE    MAE     Number of Samples
90.60   50.35   55
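A minimal sketch of how the metrics requested in (b) and (d) could be computed is shown below; y_test/y_pred and nox_test/nox_pred are placeholder names for your own test targets and model outputs, and root_mean_squared_error requires a recent scikit-learn (older versions can use mean_squared_error(..., squared=False)):

```python
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    mean_absolute_error,
    precision_score,
    root_mean_squared_error,
)

# Classification metrics (Table 2 and Table 3).
# y_pred holds thresholded network outputs (0/1); y_test holds the true labels.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")
print(f"Accuracy:  {accuracy_score(y_test, y_pred):.3f}")
print(f"Precision: {precision_score(y_test, y_pred):.3f}")
print(f"Number of samples: {len(y_test)}")

# Regression metrics (Table 4).
# nox_pred holds the network's NOx(GT) estimates; nox_test holds the true values.
print(f"RMSE: {root_mean_squared_error(nox_test, nox_pred):.2f}")
print(f"MAE:  {mean_absolute_error(nox_test, nox_pred):.2f}")
print(f"Number of samples: {len(nox_test)}")
```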
3 Testing and discussing your code
As part of the assignment evaluation, your code will be tested by tutors together
with you in a discussion session carried out during the tutorial session in week 6.
The assignment has a total of 25 marks. The discussion is mandatory and,
therefore, we will not mark any assignment not discussed with tutors.
You are expected to propose and build neural models for classification
and regression tasks. The minimal output we expect to see is the set of results
mentioned above in Section 2.4. You will receive marks for each of these
subsections as shown in Table 5, i.e. 7 marks in total. However, it's fine if
you want to include any other outcome to highlight particular aspects when
testing and discussing your code with your tutor.
For marking your results, you should be prepared to simulate your neural
model with a generalisation set we have set aside for that purpose. You
must anticipate this by including in your submission a script ready to open
a file (with the same characteristics as the given dataset but with fewer data
points), simulate the network, and perform all the validation tests described
in Section 2.4 (b) and (d) (accuracy, precision, RMSE, MAE). It is recommended
to save all of your hyper-parameters and weights (your model in general) so
you can call your network and perform the analysis later in your discussion
session; a minimal sketch is shown below.
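A minimal sketch of saving the trained models and re-running the validation tests on a generalisation file is shown below; the file names, the preprocess() helper, and the X_gen inputs are placeholders for your own code, and the native .keras save format assumes a recent TensorFlow/Keras version:

```python
from tensorflow import keras

# After training: persist each model (architecture, weights and hyper-parameters).
clf_model.save("co_classifier.keras")
reg_model.save("nox_regressor.keras")

# In the script run during the discussion session: reload the saved models.
clf_model = keras.models.load_model("co_classifier.keras")
reg_model = keras.models.load_model("nox_regressor.keras")

# Open the generalisation file (same structure as the given dataset, fewer rows),
# apply the same preprocessing, then recompute accuracy, precision, RMSE and MAE
# as in Section 2.4 (b) and (d). preprocess() is a placeholder for your own helper.
X_gen, y_gen, nox_gen = preprocess("generalisation_set.csv")
y_pred = (clf_model.predict(X_gen) > 0.5).astype(int)   # above/below threshold
nox_pred = reg_model.predict(X_gen)
```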
For the classification task, you need to compute accuracy and precision using
the generalisation set, while for the regression task you need to compute RMSE
and MAE.
You will receive 3 marks for each task, given successful results. Expected
results should be as follows:
• For the classification task, your network should achieve at least 85%
accuracy and precision. Accuracy and precision lower than that will
result in a score of 0 marks for that specific section.
• For the regression task, your network is expected to achieve an RMSE of
at most 280 and an MAE of at most 220 for unseen data points. Errors
higher than the mentioned values will receive 0 marks.
Finally, you will receive 1 mark for code readability for each task, and
your tutor will also give you a maximum of 5 marks for each task depending
on the level of code understanding as follows: 5. Outstanding, 4. Great,
3. Fair, 2. Low, 1. Deficient, 0. No answer.
Table 5: Marks for each task.

Task                                                                        Marks
Results obtained with the given dataset
  Loss and accuracy plots for classification task                          2 marks
  Confusion matrix and accuracy and precision tables for classification task  2 marks
  Loss and estimated NOx(GT) plots for regression task                     2 marks
  Performance indexes table for regression task                            1 mark
Results obtained with the generalisation dataset
  Accuracy and precision for classification task                           3 marks
  RMSE and MAE for regression task                                         3 marks
Code understanding and discussion
  Code readability for classification task                                 1 mark
  Code readability for regression task                                     1 mark
  Code understanding and discussion for classification task                5 marks
  Code understanding and discussion for regression task                    5 marks
Total marks                                                                25 marks
4 Submitting your assignment
The assignment must be done individually. You must submit your assignment
solution via Moodle. Your submission will consist of a single .ipynb Jupyter file. This file
should contain all the necessary code for reading files, data preprocessing,
network architecture, and result evaluations. Additionally, your file should
include short text descriptions to help markers better understand your code.
Please be mindful that providing clean and easy-to-read code is a part of
your assignment.
Please indicate your full name and your zID at the top of the file as a
comment. You can submit as many times as you like before the deadline –
later submissions overwrite earlier ones. After submitting your file, a good
practice is to take a screenshot of it for future reference.
Late submission penalty: UNSW has a standard late submission penalty of 5%
per day from your mark, capped at five days from the assessment deadline;
after that, students cannot submit the assignment.
5 Deadline and questions
Deadline: Week 5, Wednesday, 26 June 2024, 11:55 PM. Please use the forum
on Moodle to ask questions related to the project. We will prioritise questions
asked in the forum. However, you should not share your code in the forum, to
avoid making it public and risking plagiarism. In that case, use the course
email cs9414@cse.unsw.edu.au as an alternative.
Although we try to answer questions as quickly as possible, we might take
up to 1 or 2 business days to reply; therefore, last-minute questions might
not be answered in time.
6 Plagiarism policy
Your program must be entirely your own work. Plagiarism detection software
might be used to compare submissions pairwise (including submissions for
any similar projects from previous years) and serious penalties will be applied,
particularly in the case of repeat offences.
Do not copy from others. Do not allow anyone to see your code.
Please refer to the UNSW Policy on Academic Honesty and Plagiarism if you
require further clarification on this matter.
References
[1] De Vito, S., Massera, E., Piga, M., Martinotto, L. and Di Francia, G., 2008. On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario. Sensors and Actuators B: Chemical, 129(2), pp. 750-757.
[2] Hodson, T. O., 2022. Root mean square error (RMSE) or mean absolute error (MAE): When to use them or not. Geoscientific Model Development Discussions, 2022, pp. 1-10.
