Cloud Computing Spring 2024
Assignment 1: Building an Automatically Scaling Web Application
Deadline: Monday, April 15, 2024
1 Aim and Scope
In this assignment we will build a small automatically scaling testbed for a (very) trivial Web
application. The goal of the assignment is to become familiar with all facets of scaling Web
applications, which will increase your understanding of the low-level / fundamental implementation
details of Cloud systems. As we have discussed in class, we could deploy such a web application
within a virtual machine or within containers. To keep things manageable on a single workstation,
or laptop, we will constrain ourselves to a single host and make use of containers. In order to
simulate saturated servers, the Web application will be single-threaded and can be rate limited on
purpose. Do note that this assignment can be implemented similarly using virtual machines, and
also can be easily extended to a distributed system consisting of multiple hosts. Both of these are
out of scope of this assignment however.
As has been discussed during the lectures, a number of components are required to create an
automatically scaling Web application. These components are summarized in the following figure:
[Figure: system overview. A load generator ("client") sends requests to a load balancer, which distributes them over one or more Web application containers. The Web application containers access shared data over an internal container network. A scaling controller monitors the load balancer and instructs the container engine to start/stop instances. All of these components will run in the container engine.]
As can be seen, the assignment is centered around a Web application, deployed in a container,
that can be scaled out when the incoming load requires it. A load balancer is used to balance
the load over multiple instances of the Web application (or a single instance if this is sufficient to
sustain the load). It is the responsibility of the scaling controller to monitor the incoming load
and to scale out or scale in when required. The load generator is used in experiments to evaluate
the system.
2 Requirements
The goal of this assignment is to create the setup as depicted in the above figure and to perform
a number of experiments. In a report that must accompany your submission, your setup needs to be well documented and the architecture described; the report must also include the experiments that have been conducted. As the actual Web application to scale, you will be provided with a small
and trivial object store API. This application is single-threaded, can be rate limited on purpose
and can insert random delays in serving HTTP requests to mimic a server under load (both of
which are configurable). See Appendix A for details.
Within this assignment, you will be using podman as container engine. As described above,
we have chosen to use containers for this assignment because they are easier to work with than a fully fledged virtual machine hypervisor and require (significantly) less memory and disk space
for hosting multiple instances of the developed Web application. Some tips & tricks using podman
can be found in Appendix B.
The scaling controller and algorithm are to be designed and developed by you. You have freedom in designing the scaling algorithm: it can be rule-based, based on regression, use a sliding window, and so on. We recommend that you conduct a number of 'calibration' experiments in conjunction with the development of the scaling algorithm (see also the enumeration below). The information obtained through calibration should be used to design and tune a scaling algorithm for your scaling controller, for instance to derive rules/thresholds for scaling decisions. It might be worthwhile to first set a particular target response time, in effect a Service Level Agreement, that you want to achieve even under increasing client load. Finally, the project is concluded with a set of experiments to evaluate the effectiveness of your scaling controller.
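To make the design space concrete, the following is a minimal sketch of a rule-based scaling loop in Python. It is not a prescribed solution: the thresholds, the cooldown, and the get_current_load(), get_instance_count() and scale_to() helpers are hypothetical placeholders that you would replace with your own monitoring and podman logic, tuned using your calibration results.

import time

# Hypothetical tuning parameters, to be derived from calibration experiments.
SCALE_OUT_THRESHOLD = 0.8   # average load per instance above which we add one
SCALE_IN_THRESHOLD = 0.3    # average load per instance below which we remove one
MIN_INSTANCES, MAX_INSTANCES = 1, 10
COOLDOWN_SECONDS = 15       # minimum time between scaling actions (reduces oscillation)

def scaling_loop(get_current_load, get_instance_count, scale_to):
    # Simple rule-based MAPE loop; the three callables are placeholders for
    # your own monitoring (load balancer statistics) and podman execution code.
    last_action = 0.0
    while True:
        n = get_instance_count()
        load_per_instance = get_current_load() / n
        if time.time() - last_action >= COOLDOWN_SECONDS:
            if load_per_instance > SCALE_OUT_THRESHOLD and n < MAX_INSTANCES:
                scale_to(n + 1)                 # scale out
                last_action = time.time()
            elif load_per_instance < SCALE_IN_THRESHOLD and n > MIN_INSTANCES:
                scale_to(n - 1)                 # scale in
                last_action = time.time()
        time.sleep(1)                           # monitoring interval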
The following summarizes the requirements for the different system components:

• Web application: Provided on Brightspace.
• Container engine: podman must be used.
• Load balancer: Required. You are allowed to write your own (simple) load balancer, but are also free to pick an existing one such as HAProxy.
• Load generator: Required. You are allowed to either write your own request generator or to use an existing one (e.g., jMeter (Java based) or Locust (Python based)). We strongly recommend that you first design your experiments and then see whether an existing project can be used to perform them.
• Scaling controller: Required. You must write your own scaling controller. Information is to be collected from the load balancer and/or Web application instances. The scaling decision is carried out by executing podman commands or API calls (see also Appendix D). This scaling controller is to be implemented in a programming language of choice.
With regard to the experiments to be conducted within the project and to be presented and
discussed in the report:
• Functionality test: Perform a functionality experiment (or integration test) for the load generator and container engine. This is to make sure that the basics work well.
• Calibration experiments: Required. Conduct a number of calibration experiments. You will likely want to conduct these experiments in conjunction with developing your scaling algorithm.
  (a) Determine the saturation point of a single container. Play with rate limiting and the random delays to see what effects these have on response time (latency), request throughput, CPU utilization, etc.
  (b) A similar experiment, with different numbers of containers, to get an impression of how the performance would scale.
  (c) Determine the time required to spawn new containers.
• Final experiments: Required. Conduct two meaningful final experiments that evaluate and demonstrate the effectiveness of the automatic scaling that you have implemented. These experiments must investigate the response of the system to an increasing and decreasing number of client requests, so as to demonstrate that the system will automatically scale out and in. Investigate the response time of the scaling system and try to improve it, to show that the number of failed requests can be minimized or (preferably) fully eliminated such that some SLA is adhered to. Things to experiment with include changing the rules for scaling decisions used by the scaling controller, optimizing the method or timing of spawning new containers (such that they can be spawned in less time), reducing oscillation, etc.
3 Development Environment
To work on this assignment you need administrative (root) access to a Linux installation. This
could be your own laptop or workstation; however, we strongly recommend creating a dedicated virtual machine for this assignment. You can use a setup similar to your setup for Homework 1. Make sure to assign multiple cores to this virtual machine; this is important for the experiments.
The required disk space for the virtual machine depends on the OS you want to use within the
containers. In the case of the small Alpine OS, 4 to 5 GB should be enough; otherwise consider 10 GB. Document the Linux distribution used in your report.
4 Submission and Assessment
Teams may be formed that consist of at most two persons. In a team of two members, we
expect that both members contribute to the implementation of the system and execution of the
experiments. The deadline is Monday, April 15, 2024. Submit your assignments according to the
instructions below. In case there are problems with the team work, contact the lecturer by e-mail.
As part of the report you must list the contributions of each team member to the project.
The maximum grade that can be obtained is 10. The grade is the sum of the brackets below. When scoring each component we will consider whether the functionality is complete and works, as well as your own initiatives and ideas that clearly surpass the assignment's requirements.

• [4 out of 10] Completeness and functionality of the submission.
• [2 out of 10] Quality of the content and layout of the report.
• [1 out of 10] Calibration experiments (design and report).
• [3 out of 10] Quality and depth of the conducted experiments to evaluate the effectiveness of the scaling controller. This comprises the experimental design, implementation of the necessary load generator, and reporting and interpretation of the results.
Assignments must be submitted through Brightspace. For each team a single submission is
expected. Please note your names and student IDs in the text box in the submission website.
Ensure that all files that are submitted include names and student IDs.
The following needs to be submitted:

• Web application:
  - Listing of commands or Containerfile/Dockerfile to generate the container image for the web application.
  - Web application source, if modifications were made.
• Load balancer:
  - Configuration file.
  - If developed by yourself: source code.
  - If run within a container: listing of commands or Containerfile/Dockerfile to generate this container image.
• Scaling controller:
  - Source code.
• Request generator:
  - If developed by yourself: source code of the generator.
  - Configuration files, and source code of associated extension modules if required.
• Report, in PDF format (please no Word files), in which the following is described:
  - Description and explanation of the implemented architecture.
  - Your own system diagram that reflects your designed and implemented architecture.
  - Description of the development environment (e.g., which Linux distribution was used in your virtual machine).
  - Choices made during implementation.
  - A clear explanation of the scaling policy design. Use a diagram or concise pseudocode listing in your explanation.
  - Report on the design and results of the 'calibration' experiments.
  - Report on the design and results of the experiments conducted to evaluate the effectiveness of your scaling controller. This includes motivation and implementation of a load generator.
  - A list of the contributions made by each team member to the assignment, so it is clear who worked on which parts. Note that we expect both team members to contribute to the implementation of the system and the execution of the experiments.
Finally, please note the following:
• All submitted source code and reports may be subject to (automatic) plagiarism checks using Turnitin and/or MOSS. Suspicions of fraud and plagiarism will be reported to the Board of Examiners.
• The use of text or code generated by ChatGPT or other AI tools is not allowed. You are required to implement the requested source code yourself, and to write the report yourself.
• We may always invite teams to elaborate on their submission in an interview in case parts of the source code or report need further explanation.
• As with all other course work, keep assignment solutions to yourself. Do not post the code on public Git or code snippet repositories where it can be found by other students. If you use Git, make sure your repository is private.
Appendices
In these appendices we have collected some background information on the Web application that
is provided to you, a number of directions and suggestions for using Podman, and some general
tips and tricks for components of the assignment.
A Web Application API
As Web application you are provided with a trivial object store application based on a RESTful
API. The application is written in Python using the Flask web framework. Objects are stored
by name (key or object ID) in a directory on the file system. This object ID is unique. The
application only works with text files (so not with binary image or PDF files); keep that in mind in case you want to experiment with large files.
To deploy the application within a container, we recommend first installing the Python 3 packages within the container image. After that, create a Python venv [1] and install flask_limiter and flask_restful using pip.
In order to make the containers that host the Web application stateless, an external directory
will be mounted within the containers (so all containers have access to the same external directory
and thus to the same collection of objects). To keep things simple we did not consider problems
arising from concurrent access to the objects such as race conditions (we will leave this for another
course).
The application allows rate limiting and random delays to be configured. These customization
opportunities are there to support the experiments to test the scaling functionality. For the
configuration of rate limiting, refer to the bottom of the source file of the web application. By
default random delays are introduced when serving HTTP requests, to mimic a server under load.
These can be configured (and also disabled) using the variables just above the definition of the function random_delay.
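As an illustration of what such configuration can look like, here is a minimal flask_limiter sketch. The actual variable names and limits used in the provided application may differ (and the Limiter constructor signature differs between flask_limiter 2.x and 3.x), so always consult the provided source file.

# Illustrative sketch only; the provided application may differ.
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
# flask_limiter 3.x style constructor; hypothetical default limit.
limiter = Limiter(get_remote_address, app=app, default_limits=["10 per second"])

@app.route("/")
@limiter.limit("5 per second")   # hypothetical per-endpoint override
def index():
    return "ok"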
The API implemented is as follows:

GET /
    Return a list of object IDs.
DELETE /
    Delete all objects.
GET /objs/<obj_id>
    Return the content of the object with ID <obj_id>, or 404 if the object does not exist.
PUT /objs/<obj_id>
    Store the provided content within the object with ID <obj_id>. Creates a new object if an object with this ID does not yet exist, otherwise overwrites the existing object.
DELETE /objs/<obj_id>
    Delete the object with the specified ID. Returns 404 if the specified object does not exist.
GET /objs/<obj_id>/compress
    Return a BZ2-compressed, base64-encoded version of the object with ID <obj_id>, or 404 if the object does not exist.
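To illustrate the API, the sketch below stores and retrieves an object with the Python requests library. The base URL is an assumption (use the IP address and port of a running instance or of your load balancer), and the exact way the object content must be supplied is best checked against the provided source.

import requests

BASE = "http://10.88.0.10:5000"   # hypothetical instance address; adjust to your setup

requests.put(f"{BASE}/objs/hello", data="some text content")   # create or overwrite
r = requests.get(f"{BASE}/objs/hello")
print(r.status_code, r.text)                                   # 200 and the content

print(requests.get(f"{BASE}/objs/missing").status_code)        # 404: unknown object ID
print(requests.get(f"{BASE}/").text)                           # list of object IDs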
B Working with podman
We will be using podman as container engine. podman automatically handles the virtual network
and additionally provides an easy way to store state outside of the containers in order to make
our container that hosts the Web application stateless — important to simplify scaling!
To install podman refer to your solution for Homework 1, or use the development virtual
image provided by us (see Brightspace). We strongly suggest creating all containers as the root
user, to ensure that networking performs as expected. In this case, containers will be stored in
/var/lib/containers. How much storage space you need depends on what Linux distribution
you want to use within your containers. If this is a tiny distribution like Alpine, then 200 to 300
MB storage space should be enough. If you want to use a distribution like Debian, Ubuntu or
CentOS, count on 2 to 3 GB.
B.1 Commands to create container images
podman can create container images using Dockerfiles (or Containerfiles with the same syntax). Many
use another tool called buildah (which actually came before podman) to build container images.
[1] https://docs.python.org/3/tutorial/venv.html
buildah can be installed on Ubuntu/Debian with a single apt-get install buildah, or dnf
install buildah on Red Hat derivatives. buildah can handle Dockerfiles, but can also build
images from scratch. An advantage of building images from scratch is that you have full control
and can optimize for size.
A new container image can be created from scratch by starting with a base OS image. After
image creation, commands can be run inside the container to continue configuration. For the
scaling web application, you will create containers that are based on a particular container image
(see later on).
In the interest of saving disk space, one can consider using Alpine Linux. This is a (very)
small Linux distribution that is often used within containers. To initialize a new container image
based on Alpine Linux:
container=$(buildah from alpine)
Subsequently, we can run commands within the container [2]:
buildah run $container -- ls /etc
Within the container, you might want to install additional packages (Python anyone?). The exact
instructions depend on the distribution you have selected. In case you are working with Alpine,
you can install Python 3 by executing the following commands:
buildah run $container -- apk update
buildah run $container -- apk add python3
You can search the package database as follows:
buildah run $container -- apk list ’*haproxy*’
(yes, Alpine also has HAProxy packages in case you want to create a container for your load
balancer).
Files can be copied into the container using the copy subcommand, e.g.:
buildah copy $container myfile.txt /root
Finally, it is important that the changes made are committed to a named image. In this case we
will use testcontainer as name. After commit, the image will be visible within podman as well.
buildah commit $container testcontainer
Note that the same can be achieved using a Dockerfile with RUN and COPY verbs. The command
to build containers from Dockerfiles is buildah bud.
Additional Resources. For more information on buildah, refer to, for instance:
https://appfleet.com/blog/everything-you-need-to-know-about-buildah/
https://github.com/containers/buildah/tree/main/docs/tutorials
B.2 Starting the Web application on container startup
Do not forget to ensure that your Web application is started on container startup. This can be
achieved by configuring a container command or entrypoint. For example, to start the Python
built-in webserver, which serves files from the specified directory, upon container start up:
buildah config --cmd "" $container
buildah config --entrypoint "python3 -m http.server 8000 --directory /tmp" $container
buildah commit $container testcontainer
[2] On Debian 11 this does not appear to work out of the box; the following environment variable is required: export BUILDAH_RUNTIME=/usr/bin/runc.
B.3 Starting container instances
From images, container instances can be started. Multiple container instances can be started
from a single image, and this is exactly what we need to ‘scale out’ the Web application. This is
achieved using the podman command. Note that this will not create full copies of the container
image, rather, the image remains read-only and an overlay file system is placed on top of it to
catch any writes.
The container for your Web application is supposed to be stateless. This implies that when
the container is no longer necessary (scale down), it can be safely deleted as no valuable data is
stored within the container. A special command line flag is present for this purpose, such that a
container is automatically deleted upon container shutdown. Using the following command, such
a container can be instantiated and the entrypoint will be launched automatically:
podman run --rm --name mycontainer testcontainer
As you can see the testcontainer image is used to create a container named mycontainer. You
will note that the container entrypoint process remains in the foreground. To avoid this, add the
-d command line flag.
The currently active containers can be inspected using the command podman ps. Now, how
can we access the Python process running on the internal container network? We need to retrieve
the internal IP address. To do so, you can use the command:
podman container inspect mycontainer
This gives a lot of information about the container. Search for IPAddress to obtain the internal
IP address. Through this IP, you should be able to access the Python webserver on port 8000.
You can also map the container's port to a host port using the -p option. But note that in the context of this assignment, you will only want to expose the port of the load balancer; the ports of the Web application instances remain internal!
Finally, to stop a container you can use: podman stop.
B.4 Mounting a directory within a container
For the Web application you need to ensure that all containers have access to the same directory
in which the objects are stored. First make sure you have created such a directory on the host
(we use /srv/objects). In your container image, you want to create a mountpoint, for example
/objects. With this in place, add the following command line option to the podman run command
to mount the host directory /srv/objects within the container:
-v /srv/objects:/objects
C Suggestions on the load balancer
You may write your own load balancer or use an existing one such as HAProxy. As you may have
read above, Alpine does provide HAProxy packages. Therefore, it is relatively easy to set up a
container in which you can run the load balancer.
HAProxy allows you to read out statistics via HTTP. To enable this, you need to add a section
such as the following to the configuration file:
frontend stats
bind *:9999
stats enable
stats uri /stats
stats refresh 1s
After restarting HAProxy, you can connect to port 9999 to read statistics. For instance, connect to http://10.0.3.6:9999/stats (of course replace the IP address with the correct one).
The following URL returns machine-readable CSV output, which will be useful for your scaling
controller: http://10.0.3.6:9999/stats;csv.
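To give an idea of how a scaling controller can consume this endpoint, the sketch below fetches and parses the CSV output with Python's requests and csv modules. The address is the same example as above, and the exact column names to use (e.g., scur for current sessions, rate for the session rate) should be verified against the header row produced by your HAProxy version.

import csv
import io
import requests

STATS_URL = "http://10.0.3.6:9999/stats;csv"   # replace with your load balancer address

def fetch_stats():
    # The first line is the header and starts with "# "; strip that prefix so
    # csv.DictReader picks up the field names.
    text = requests.get(STATS_URL).text
    return list(csv.DictReader(io.StringIO(text.lstrip("# "))))

for row in fetch_stats():
    print(row["pxname"], row["svname"], row["scur"])   # current sessions per proxy/server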
D Considerations for the scaling controller
The scaling controller consists of two parts that can be programmed independently (make good
use of your team’s resources) before these are integrated. The monitoring part should monitor the
load balancer and/or the container instances running the web application. It needs to retrieve the
information required to make scaling decisions (should we scale up, scale down? And if so by how
many instances?). Note that this refers to the Monitoring, Analysis and Planning phases of the
MAPE feedback loop. If you choose to use HAProxy, you can monitor the haproxy daemon, which has an option to provide you with statistics (see Appendix C).
The podman part needs to be capable of creating new container instances, stopping instances, listing all instances (and their IP addresses), and performing all other container management tasks you need in order to make the scaling controller work. This is the (final) Execution phase of the MAPE loop. podman commands or API calls might be blocking by default, in which case you
want to look into asynchronous calls or multi-threading in order to make your scaling controller
more responsive. This way, you can continue monitoring while the podman commands are being executed in a different thread, as sketched below.
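A minimal sketch of this idea: run each blocking podman invocation in a worker thread so the monitoring loop keeps running. Here the podman CLI is driven through subprocess purely as an example; the image and container names are the ones used in Appendix B and are otherwise arbitrary.

import subprocess
import threading

def start_instance(name):
    # Blocking podman call; executed in a worker thread so monitoring continues.
    subprocess.run(["podman", "run", "-d", "--rm", "--name", name,
                    "-v", "/srv/objects:/objects", "testcontainer"], check=True)

def start_instance_async(name):
    t = threading.Thread(target=start_instance, args=(name,), daemon=True)
    t.start()
    return t   # join() the thread later if you need to know when it finished

# In the monitoring loop:
# start_instance_async("webapp-3")   # returns immediately; monitoring goes on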
After instructing podman to start or stop a container, you also need to update the configuration of the load balancer. How exactly this should be done depends on the load balancer you have chosen to use. In the case of HAProxy there is no clear runtime API to update the list of servers. The most straightforward way to achieve this is to have your scaling controller regenerate the HAProxy configuration file, send this configuration to the load balancer container (or use a volume mount?) and reload HAProxy. Hacky, but it works, and it appears this is used in practice(!). A sketch of this approach follows below.
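The sketch below makes a number of assumptions that you will need to adapt: the configuration file is shared with the load balancer container via a volume mount, the Web application listens on port 5000, and HAProxy runs as PID 1 in master-worker mode (haproxy -W), in which sending SIGUSR2 to the master process triggers a configuration reload. The template and names are illustrative only.

import subprocess

CONFIG_PATH = "/srv/haproxy/haproxy.cfg"   # assumed volume-mounted into the container

TEMPLATE = """defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend web
    bind *:8080
    default_backend webapps

backend webapps
{servers}
"""

def regenerate_config(instance_ips):
    servers = "\n".join(f"    server web{i} {ip}:5000 check"
                        for i, ip in enumerate(instance_ips))
    with open(CONFIG_PATH, "w") as f:
        f.write(TEMPLATE.format(servers=servers))
    # Assumes HAProxy runs as PID 1 in master-worker mode inside the
    # hypothetical 'loadbalancer' container; SIGUSR2 then reloads the config.
    subprocess.run(["podman", "exec", "loadbalancer", "kill", "-USR2", "1"],
                   check=True)

regenerate_config(["10.88.0.11", "10.88.0.12"])   # example instance IPs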
podman can be controlled through shell commands, but also via its RESTful API. This API
is documented in detail: https://docs.podman.io/en/v3.2.3/_static/api.html.
While you can target this RESTful API directly, fortunately also language bindings exist for at
least Python and Go. We give a small example of the Python API. Before you can use this API,
the module needs to be installed with pip3 install podman.
Obtain a list of names of defined containers:
from podman import PodmanClient
client = PodmanClient(base_url="unix:///run/podman/podman.sock")
l = [c.name for c in client.containers.list()]
Get a handle on a container and, if it is running, print the IP address of this container [3]:
from podman import PodmanClient
client = PodmanClient(base_url="unix:///run/podman/podman.sock")
c = client.containers.get("testcontainer")
if c.status == 'running':
    print(c.attrs['NetworkSettings']['Networks']['podman']['IPAddress'])
Containers can be stopped with the .stop() method. The method .wait(condition='running') waits (blocking) until a container is running.
[3] The 'podman' key in the dictionary access is the name of the default container network; see the command podman network ls.
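The same client can also perform the scaling actions. The sketch below creates and starts a new instance from the testcontainer image used in Appendix B; podman-py follows the docker-py style API, but it is worth verifying the exact keyword arguments against the module documentation mentioned below.

from podman import PodmanClient

client = PodmanClient(base_url="unix:///run/podman/podman.sock")

# Create and start a new Web application instance from the image built earlier.
c = client.containers.create("testcontainer", name="webapp-2")
c.start()
c.wait(condition="running")   # block until the container is actually up
print(c.name)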
The Python module has extensive documentation; use the help command in an interactive Python shell. We could not find this documentation online.
E Testing the system
In order to test the completed system, you want to use an HTTP load generator. Options are writing such a generator yourself, jMeter, Locust, or something else you find suitable. It is important to first design your experiments and only then decide on a load or traffic pattern generator, so that you are not limited in your experiments by a previously selected load generator.
Locust is Python-based and works as a command-line utility. First, you need to write a locust file
(refer to the website https://docs.locust.io/en/stable/quickstart.html for an example).
After that you can start locust:
locust -f mylocustfile.py --headless -u 10 -t 300s -r 0.5
Here -u configures the number of (concurrent) users, -t configures the runtime of the experiment, and -r configures the rate at which users are spawned. The parameter values given are just examples; you should set up your own experiment.
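For completeness, here is a minimal locustfile for the object store API of Appendix A, roughly following the quickstart linked above. The task weights, wait time and object ID are arbitrary illustrative choices.

# mylocustfile.py -- minimal sketch; tune the tasks and weights to your experiment design.
from locust import HttpUser, task, between

class ObjectStoreUser(HttpUser):
    wait_time = between(0.5, 2.0)   # pause between tasks per simulated user

    @task(3)
    def read_object(self):
        self.client.get("/objs/testobject")   # 404s are expected until the object exists

    @task(1)
    def write_object(self):
        self.client.put("/objs/testobject", data="some text content")

Start it with the command shown above, adding --host http://<load-balancer-IP>:<port> so Locust knows where to send the requests.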