Yi Zeng | 曾祎

Ph.D. Candidate | yizeng@vt.edu

Welcome! I am a Ph.D. Candidate at Virginia Tech, under the guidance of Prof. Ruoxi Jia. My research is committed to ensuring that innovation in AI is intrinsically tied to safety and ethical considerations, working toward a future where technology is a responsible and reliable ally to humanity. As the recipient of the Amazon Fellowship, I am fortunate to have the support and resources to pursue vital work ensuring ethical and responsible uses of AI. As team players, we can positively impact the future of AI together!

Before that, I obtained my master’s degree in Machine Learning and Data Science at the Jacobs School of Engineering, University of California, San Diego, working with Prof. Farinaz Koushanfar. My undergraduate thesis, “Deep-Full-Range,” which applied deep learning to network intrusion detection and was supervised by Prof. Huaxi Gu, received the best degree paper award at Xidian University. Since 2018, I have been increasingly focused on security and privacy issues related to AI and have worked closely with Prof. Han Qiu, Prof. Tianwei Zhang, and Prof. Meikang Qiu.

Please find my CV here (last updated in Nov. 2023).

NEWS

CONFERENCES

Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model. However, the security of the synthetic or out-of-distribution (OOD) data required in data-free KD is largely unknown and under-explored. In this work, we make the first effort to uncover the security risk of data-free KD w.r.t. untrusted pre-trained models. We then propose Anti-Backdoor Data-Free KD (ABD), the first plug-in defensive method for data-free KD methods to mitigate the chance of potential backdoors being transferred. We empirically evaluate the effectiveness of our proposed ABD in diminishing transferred backdoor knowledge while maintaining downstream performance comparable to vanilla KD. We envision this work as a milestone for raising awareness of and mitigating potential backdoors in data-free KD.

Recently, image generation models have attracted growing attention. However, concerns have emerged regarding potential misuse and intellectual property (IP) infringement associated with these models. It is therefore necessary to analyze the origin of images by inferring whether a specific image was generated by a particular model, i.e., origin attribution. Existing methods are limited to specific types of generative models and require additional steps during training or generation. This restricts their use with pre-trained models that lack these specific operations and may compromise the quality of image generation. To overcome this problem, we develop the first alteration-free and model-agnostic origin attribution method via input reverse-engineering on image generation models, i.e., inverting the input of a particular model for a specific image. Given a particular model, we first analyze the differences in the hardness of reverse-engineering tasks between images generated by the given model and other images. Based on our analysis, we propose a method that uses the reconstruction loss of reverse-engineering to infer the origin. Our proposed method effectively distinguishes between images generated by a specific generative model and other images, including those generated by different models and real images.
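
To make the mechanics concrete, below is a minimal PyTorch sketch of reconstruction-loss-based attribution, assuming a differentiable pre-trained generator G that maps a latent code to an image; the function names, latent dimension, and threshold are illustrative rather than the paper's exact settings.

```python
# A minimal sketch of reconstruction-based origin attribution. `G` is assumed
# to be a differentiable pre-trained generator (latent -> image); names and
# the threshold are illustrative, not the paper's exact procedure.
import torch

def reconstruction_loss(G, x, latent_dim=128, steps=500, lr=0.05, device="cpu"):
    """Invert G for image x (a (1, C, H, W) tensor) by optimizing a latent code;
    return the final reconstruction loss."""
    x = x.to(device)
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)
        loss.backward()
        opt.step()
    return loss.item()

def belongs_to_model(G, x, threshold=0.01):
    """Attribute x to G if it is easy to reverse-engineer (low reconstruction loss)."""
    return reconstruction_loss(G, x) < threshold
```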

Backdoor attacks insert malicious data into a training set so that, at inference time, the trained model misclassifies inputs patched with a backdoor trigger as the attacker-specified label. For backdoor attacks to bypass human inspection, it is essential that the injected data appear to be correctly labeled. Attacks with this property are often referred to as “clean-label attacks.” Existing clean-label backdoor attacks require knowledge of the entire training set to be effective. Obtaining such knowledge is difficult or impossible because training data are often gathered from multiple sources (e.g., face images from different users). It remains a question whether backdoor attacks still present a real threat.


This paper provides an affirmative answer to this question by designing an algorithm to mount clean-label backdoor attacks based only on the knowledge of representative examples from the target class. With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class when the examples are patched with a backdoor trigger. Our attack works well across datasets and models, even when the trigger is presented in the physical world.


We explore the space of defenses and find that, surprisingly, our attack can evade the latest state-of-the-art defenses in their vanilla form, or, after a simple twist, can be adapted to evade downstream defenses. We study the cause of this intriguing effectiveness and find that, because the trigger synthesized by our attack contains features as persistent as the original semantic features of the target class, any attempt to remove such triggers would inevitably hurt model accuracy first.
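
For intuition only, the sketch below shows a generic way to synthesize a bounded clean-label trigger from target-class examples and a surrogate classifier; the names, hyperparameters, and loss are assumptions for illustration and do not reproduce the paper's exact algorithm.

```python
# A simplified, generic sketch of clean-label trigger synthesis from
# target-class examples only, using an assumed surrogate classifier.
import torch
import torch.nn.functional as F

def synthesize_trigger(surrogate, target_loader, target_class,
                       eps=16 / 255, steps=300, lr=0.01, shape=(3, 32, 32)):
    """Optimize a bounded trigger that pulls target-class samples deeper into
    the target class, so poisoned samples keep their correct label."""
    delta = torch.zeros(1, *shape, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, _ in target_loader:            # only target-class images
            opt.zero_grad()
            x_adv = torch.clamp(x + delta, 0, 1)
            y = torch.full((x.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(surrogate(x_adv), y)
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)       # keep the trigger imperceptible
    return delta.detach()
```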

Backdoor data detection is traditionally studied in an end-to-end supervised learning (SL) setting. However, recent years have seen the proliferating adoption of self-supervised learning (SSL) and transfer learning (TL), due to their lesser need for labeled data. Successful backdoor attacks have also been demonstrated in these new settings. However, we lack a thorough understanding of the applicability of existing detection methods across a variety of learning settings. By evaluating 56 attack settings, we show that the performance of most existing detection methods varies significantly across different attacks and poison ratios, and all fail on the state-of-the-art clean-label backdoor attack, which manipulates only a few training samples’ features with imperceptible noise, without changing labels. In addition, existing methods either become inapplicable or suffer large performance losses when applied to SSL and TL. We propose a new detection method called Active Separation via Offset (ASSET), which actively induces different model behaviors between backdoor and clean samples to promote their separation. We also provide procedures to adaptively select the number of suspicious points to remove. In the end-to-end SL setting, ASSET is superior to existing methods in terms of consistency of defensive performance across different attacks and robustness to changes in poison ratios; in particular, it is the only method that can detect the state-of-the-art clean-label attack. Moreover, ASSET’s average detection rates are higher than those of the best existing methods in SSL and TL by 69.3% and 33.2%, respectively, providing the first practical backdoor defense for these emerging DL settings.

External data sources are increasingly being used to train machine learning (ML) models as the demand for data increases. However, the integration of external data into training poses data poisoning risks, where malicious providers manipulate their data to compromise the utility or integrity of the model. Most data poisoning defenses assume access to a set of clean data (referred to as the base set), which could be obtained through trusted sources. But it is also becoming common for the entire data source of an ML task to be untrusted (e.g., learning from Internet data). In this case, one needs to identify a subset within a contaminated dataset to serve as the base set and support these defenses.

This paper starts by examining the performance of defenses when poisoned samples are mistakenly mixed into the base set. We analyze five representative defenses that use base sets and find that their performance deteriorates dramatically with less than 1% poisoned points in the base set. These findings suggest that sifting out a base set with high precision is key to these defenses’ performance. Motivated by these observations, we study how precise existing automated tools and human inspection are at identifying clean data in the presence of data poisoning. Unfortunately, neither achieves the precision needed to enable effective defenses. Worse yet, many of their outcomes are worse than random selection.

In addition to uncovering the challenge, we take a step further and propose a practical countermeasure, Meta-Sift. Our method is based on the insight that existing poisoning attacks use data manipulation techniques that cause shifts from clean data distributions. Hence, training on the clean portion of a poisoned dataset and testing on the corrupted portion will result in high prediction loss. Leveraging this insight, we formulate a bilevel optimization to identify clean data and further introduce a suite of techniques to improve the efficiency and precision of the identification. Our evaluation shows that Meta-Sift can sift out a clean base set with 100% precision under a wide range of poisoning threats. The selected base set is large enough to give rise to successful defenses when plugged into existing defense techniques.
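
As a rough illustration of the insight (not the full bilevel formulation), the sketch below scores each point by its held-out loss under models fitted on random splits; the model factory, split counts, and training schedule are assumed for illustration.

```python
# A simplified sketch of the core insight behind the sifting step: a model
# fitted on one random split assigns high loss to manipulated points in the
# other split. This is an illustration, not the paper's bilevel algorithm.
import torch
import torch.nn.functional as F

def loss_based_scores(model_fn, X, y, n_rounds=5, holdout_frac=0.5, epochs=3):
    """Return an average held-out loss per sample; low scores suggest clean data."""
    n = X.shape[0]
    scores = torch.zeros(n)
    counts = torch.zeros(n)
    n_hold = int(n * holdout_frac)
    for _ in range(n_rounds):
        idx = torch.randperm(n)
        hold_idx, train_idx = idx[:n_hold], idx[n_hold:]
        model = model_fn()                         # a small, fresh classifier
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(model(X[train_idx]), y[train_idx])
            loss.backward()
            opt.step()
        with torch.no_grad():
            per_sample = F.cross_entropy(model(X[hold_idx]), y[hold_idx], reduction="none")
        scores[hold_idx] += per_sample
        counts[hold_idx] += 1
    return scores / counts.clamp(min=1)

# Candidate base set: the points with the lowest average held-out loss.
```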

In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to the worst-case perturbation given a neural network. However, such sample-wise bounds are loose under the UP threat model, as they overlook the important constraint that the perturbation must be shared across all samples. We propose a method that combines linear relaxation-based perturbation analysis with Mixed Integer Linear Programming to establish the first robustness certification method for UPs. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Aside from an extensive evaluation of the proposed certification, we further show how the certification facilitates efficient comparison of robustness among different models, or of efficacy among different defenses against universal adversarial attacks, and enables accurate detection of backdoor target classes.
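
A hedged way to state the gap, in our own notation rather than the paper's: sample-wise certification lets the perturbation vary per sample, while the UP threat model forces a single shared perturbation, so the UP-aware quantity is never smaller.

```latex
% Notation is ours; m(., y) denotes the prediction margin for the true label y.
\underbrace{\frac{1}{n}\sum_{i=1}^{n}\ \min_{\|\delta_i\|_\infty\le\epsilon}
  m\big(f_\theta(x_i+\delta_i),\,y_i\big)}_{\text{sample-wise (independent perturbations)}}
\;\le\;
\underbrace{\min_{\|\delta\|_\infty\le\epsilon}\ \frac{1}{n}\sum_{i=1}^{n}
  m\big(f_\theta(x_i+\delta),\,y_i\big)}_{\text{universal perturbation (shared)}}
```

The inequality holds because a shared perturbation is a special case of per-sample perturbations, which is why UP-aware certificates can be strictly tighter than aggregated sample-wise bounds.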

 

Traditionally, data valuation is posed as a problem of equitably splitting the validation performance of a learning algorithm among the training data. As a result, the calculated data values depend on many design choices of the underlying learning algorithm. However, this dependence is undesirable for many use cases of data valuation, such as setting priorities over different data sources in a data acquisition process and informing pricing mechanisms in a data marketplace. In these scenarios, data needs to be valued before the actual analysis and the choice of the learning algorithm is still undetermined then. Another side-effect of the dependence is that to assess the value of individual points, one needs to re-run the learning algorithm with and without a point, which incurs a large computation burden. This work leapfrogs over the current limits of data valuation methods by introducing a new framework that can value training data in a way that is oblivious to the downstream learning algorithm. Our main results are as follows. (1) We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between the training and the validation set. We show that the distance characterizes the upper bound of the validation performance for any given model under certain Lipschitz conditions. (2) We develop a novel method to value individual data based on the sensitivity analysis of the class-wise Wasserstein distance. Importantly, these values can be directly obtained for free from the output of off-the-shelf optimization solvers once the Wasserstein distance is computed. (3) We evaluate our new data valuation framework over various use cases related to detecting low-quality data and show that, surprisingly, the learning-agnostic feature of our framework enables a significant improvement over the state-of-the-art performance while being orders of magnitude faster.
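
As a simplified illustration of the mechanics (plain rather than class-wise costs), the sketch below computes an exact optimal-transport distance between training and validation features and reads per-point dual variables off the solver's output; it assumes the POT library and feature matrices as inputs, and is not the paper's exact valuation.

```python
# A hedged, simplified sketch: measure an optimal-transport distance between
# training and validation features, and read per-point sensitivities off the
# dual solution returned by the solver. Uses the POT library (assumed) with
# plain, not class-wise, costs.
import numpy as np
import ot  # Python Optimal Transport

def ot_distance_and_duals(train_feats, val_feats):
    n, m = len(train_feats), len(val_feats)
    a = np.full(n, 1.0 / n)                  # uniform weights on training points
    b = np.full(m, 1.0 / m)                  # uniform weights on validation points
    M = ot.dist(train_feats, val_feats)      # pairwise squared-Euclidean costs
    plan, log = ot.emd(a, b, M, log=True)    # exact OT plan plus dual potentials
    distance = float(np.sum(plan * M))
    # log["u"] holds one dual variable per training point; its relative size
    # indicates how sensitive the distance is to that point's mass.
    return distance, log["u"]
```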

 

Previous works have validated that text generation APIs can be stolen through imitation attacks, causing IP violations. To protect the IP of text generation APIs, recent work has introduced a watermarking algorithm and utilized the null-hypothesis test as a post-hoc ownership verification on the imitation models. However, we find that such watermarks can be detected via sufficient statistics of the frequencies of candidate watermarking words. To address this drawback, in this paper we propose a novel Conditional wATERmarking framework (CATER) for protecting the IP of text generation APIs. An optimization method is proposed to decide the watermarking rules that minimize the distortion of overall word distributions while maximizing the change in conditional word selections. Theoretically, we prove that it is infeasible for even the savviest attacker (one who knows how CATER works) to reveal the used watermarks from a large pool of potential word pairs based on statistical inspection. Empirically, we observe that high-order conditions lead to an exponential growth of suspicious (unused) watermarks, making our crafted watermarks more stealthy. In addition, CATER can effectively identify IP infringement under architectural mismatch and cross-domain imitation attacks, with negligible impairment of the generation quality of victim APIs. We envision our work as a milestone for stealthily protecting the IP of text generation APIs.
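
A toy sketch of the conditional idea follows: a substitution fires only when a context condition holds (here, the preceding word, as a stand-in for CATER's linguistic conditions), so conditional word choices shift while overall word frequencies barely move. The rule table is invented for illustration, not an optimized CATER rule set.

```python
# A toy illustration of conditional lexical watermarking. The rules and the
# condition (previous word) are invented for illustration only.
WATERMARK_RULES = {
    # (previous word, original word) -> watermarked replacement
    ("the", "movie"): "film",
    ("a", "big"): "large",
}

def apply_conditional_watermark(text: str) -> str:
    words = text.split()
    out = []
    for i, w in enumerate(words):
        prev = words[i - 1].lower() if i > 0 else ""
        out.append(WATERMARK_RULES.get((prev, w.lower()), w))
    return " ".join(out)

print(apply_conditional_watermark("the movie has a big ending"))
# -> "the film has a large ending"
```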

 

 

We propose a minimax formulation for removing backdoors from a given poisoned model based on a small set of clean data. This formulation encompasses much of the prior work on backdoor removal. We propose the Implicit Backdoor Adversarial Unlearning (I-BAU) algorithm to solve the minimax. Unlike previous work, which breaks down the minimax into separate inner and outer problems, our algorithm utilizes the implicit hypergradient to account for the interdependence between the inner and outer optimization. We theoretically analyze its convergence and the generalizability of the robustness gained by solving the minimax on clean data to unseen test data. In our evaluation, we compare I-BAU with six state-of-the-art backdoor defenses on eleven backdoor attacks over two datasets and various attack settings, including the common setting where the attacker targets one class as well as important but underexplored settings where multiple classes are targeted. I-BAU’s performance is comparable to, and most often significantly better than, the best baseline. In particular, its performance is more robust to variation in triggers, attack settings, poison ratio, and clean data size. Moreover, I-BAU requires less computation to take effect; in particular, it is more than 13× faster than the most efficient baseline in the single-target attack setting. Furthermore, it remains effective in the extreme case where the defender can only access 100 clean samples, a setting where all the baselines fail to produce acceptable results.
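
In our own notation (a hedged rendering rather than a quotation from the paper), the minimax being solved can be written as:

```latex
\min_{\theta}\ \max_{\|\delta\|\le\epsilon}\ \frac{1}{n}\sum_{i=1}^{n}
  \mathcal{L}\big(f_\theta(x_i+\delta),\,y_i\big),
\qquad (x_i, y_i)\ \text{drawn from the small clean set,}
```

where the inner maximization searches for a universal trigger-like perturbation on the clean set and the outer minimization updates the model to unlearn it; I-BAU differentiates through the inner solution via the implicit hypergradient instead of treating the two problems separately.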

 

 

 

Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers while still retaining state-of-the-art performance on clean data. While backdoor attacks have been thoroughly investigated in the image domain from both the attackers’ and defenders’ sides, an analysis in the frequency domain has been missing thus far. This paper first revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis. Our results show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions. We further demonstrate that these high-frequency artifacts enable a simple way to detect existing backdoor triggers at a detection rate of 98.50% without prior knowledge of the attack details or the target model. Acknowledging previous attacks’ weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability. We show that existing defense works can benefit from incorporating these smooth triggers into their design considerations. Moreover, we show that a detector tuned on stronger smooth triggers can generalize well to unseen weak smooth triggers. In short, our work emphasizes the importance of considering frequency analysis when designing both backdoor attacks and defenses in deep learning.
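
A minimal sketch of the frequency-domain signal is below: take a 2D DCT per channel and measure the fraction of energy in the high-frequency region. The cutoff and the single-score rule are illustrative assumptions; the paper builds a learned detector on top of such spectra.

```python
# A minimal sketch of the frequency-domain perspective: 2D DCT per channel,
# then the fraction of spectral energy in the high-frequency corner.
import numpy as np
from scipy.fft import dctn

def high_freq_energy_ratio(image, cutoff=0.75):
    """image: (H, W, C) array in [0, 1]; returns the average fraction of
    spectral energy beyond `cutoff` of the spectrum, per channel."""
    h, w = image.shape[:2]
    ratio = 0.0
    for c in range(image.shape[2]):
        spec = np.abs(dctn(image[:, :, c], norm="ortho"))
        total = spec.sum() + 1e-12
        hi = spec[int(cutoff * h):, :].sum() + spec[:int(cutoff * h), int(cutoff * w):].sum()
        ratio += hi / total
    return ratio / image.shape[2]

# Images patched with sharp, pixel-level triggers tend to score noticeably
# higher than clean images from the same dataset.
```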

 

Public resources and services (e.g., datasets, training platforms, pre-trained models) have been widely adopted to ease the development of Deep Learning-based applications. However, if the third-party providers are untrusted, they can inject poisoned samples into the datasets or embed backdoors in those models. Such an integrity breach can cause severe consequences, especially in safety- and security-critical applications. Various backdoor attack techniques have been proposed for higher effectiveness and stealthiness. Unfortunately, existing defense solutions are not practical enough to thwart those attacks in a comprehensive way. In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models’ robustness, and introduce an evaluation framework to achieve this goal. Specifically, we consider a unified defense solution, which (1) adopts a data augmentation policy to fine-tune the infected model and eliminate the effects of the embedded backdoor; (2) uses another augmentation policy to preprocess input samples and invalidate the triggers during inference. We propose a systematic approach to discover the optimal policies for defending against different backdoor attacks by comprehensively evaluating 71 state-of-the-art data augmentation functions. Extensive experiments show that our identified policy can effectively mitigate eight different kinds of backdoor attacks and outperform five existing defense methods. We envision that this framework can serve as a good benchmark tool to advance future DNN backdoor studies.

 

Watermarking has become a prevailing approach to protecting the intellectual property of DNN models. Recent works, from the adversary’s perspective, have attempted to subvert watermarking mechanisms by designing watermark removal attacks. However, these attacks mainly adopt sophisticated fine-tuning techniques, which have certain fatal drawbacks or unrealistic assumptions. In this paper, we propose a novel watermark removal attack from a different perspective. Instead of just fine-tuning the watermarked models, we design a simple yet powerful transformation algorithm combining imperceptible pattern embedding and spatial-level transformations, which can effectively and blindly destroy a watermarked model’s memorization of the watermark samples. We also introduce a lightweight fine-tuning strategy to preserve model performance. Our solution requires far fewer resources and much less knowledge about the watermarking scheme than prior works. Extensive experimental results indicate that our attack can bypass state-of-the-art watermarking solutions with very high success rates. Based on our attack, we propose watermark augmentation techniques to enhance the robustness of existing watermarks.

 

Deep Neural Networks (DNNs) in Computer Vision (CV) are well-known to be vulnerable to Adversarial Examples (AEs), namely imperceptible perturbations added maliciously to cause wrong classification results. Such vulnerability poses a potential risk for real-life systems equipped with DNNs as core components. Numerous efforts have been put into research on how to protect DNN models from being fooled by AEs. However, no previous work can efficiently reduce the effects of novel adversarial attacks while remaining compatible with real-life constraints. In this paper, we focus on developing a lightweight defense method that can efficiently invalidate full white-box adversarial attacks while remaining compatible with real-life constraints. Starting from basic affine transformations, we integrate three transformations with randomized coefficients that are fine-tuned with respect to the amount of change applied to the defended sample. Compared to four state-of-the-art defense methods published in top-tier AI conferences in the past two years, our method demonstrates outstanding robustness and efficiency. It is worth highlighting that our method can withstand an advanced adaptive attack, namely BPDA with 50 rounds, and still helps the target model maintain an accuracy of around 80% while constraining the attack success rate to almost zero.
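
A minimal sketch of the preprocessing idea, assuming torchvision and illustrative parameter ranges rather than the paper's tuned coefficients:

```python
# A minimal sketch of defending by randomized affine preprocessing at
# inference time; the ranges below are illustrative assumptions.
import torch
from torchvision import transforms

# Small random rotation, translation, and scaling, re-sampled on every call,
# so gradients estimated by an attacker do not transfer exactly.
randomized_affine = transforms.RandomAffine(
    degrees=10, translate=(0.05, 0.05), scale=(0.95, 1.05)
)

def defended_predict(model, x):
    """x: a batch of image tensors in [0, 1]; apply a fresh random affine
    transform before classification."""
    with torch.no_grad():
        return model(randomized_affine(x)).argmax(dim=1)
```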

 

Accurate network traffic classification is urgently needed in the big data era, as anomalous network traffic has become formidable to classify in today’s complicated network environments. Deep Learning (DL) techniques excel at detecting anomalous data due to their capability to fit training data. However, this capability relies on the correctness of the training data, which also makes such models sensitive to annotation errors. We propose that, by measuring the uncertainty of the model, annotation errors can be accurately corrected for classifying network traffic. We use dropout to approximate the prior distribution and calculate the Mutual Information (MI) and Softmax Variance (SV) of the output. In this paper, we present a framework named Uncertainty Based Annotation Error Correction (UAEC), based on both MI and SV, whose accuracy outperforms other proposed methods. By modifying the labels of a public dataset, a real-life annotation scenario is simulated. Based on the regenerated dataset, we compare the detection effectiveness of Euclidean Distance, MI, SV, and UAEC. As demonstrated in the experiments, using UAEC attains an average 47.92% increase in detection accuracy.
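
A minimal sketch of the two uncertainty signals, assuming a PyTorch classifier with dropout layers; the number of passes is illustrative:

```python
# A minimal sketch of Monte Carlo dropout uncertainty: mutual information
# (predictive entropy minus expected entropy) and softmax variance.
import torch

def mc_dropout_uncertainty(model, x, n_passes=20):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )                                   # shape (T, B, C)
    mean_p = probs.mean(dim=0)              # shape (B, C)
    entropy = lambda p: -(p * torch.log(p + 1e-12)).sum(dim=-1)
    mutual_info = entropy(mean_p) - entropy(probs).mean(dim=0)   # MI per sample
    softmax_var = probs.var(dim=0).mean(dim=-1)                  # SV per sample
    return mutual_info, softmax_var

# Samples with both high MI and high SV are flagged as likely annotation errors.
```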

 

Due to their outstanding performance in feature extraction and classification, Deep Learning (DL) models have been deployed in many existing network systems. Nowadays, DL can be used in cyber security systems, such as detection systems for malicious Uniform Resource Locator (URL) links. In practical usage, DL-based models have proved to achieve better accuracy and efficiency in detecting malicious URL links in current network systems. However, some DL models are vulnerable to subtle changes of inputs, such as Adversarial Examples (AEs), which also exist in the URL detection scenario: malicious URL links can bypass DL-based detection with a crafty change, threatening the security of network systems. In this paper, we present an AE generation method against DL-based web URL detection systems. We can generate AEs with minimal changes to the inputs (one byte in the URL) to bypass the DL-based URL classification model with a high success rate.
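
A hedged sketch of the one-byte idea: greedily try single-character substitutions until the classifier's malicious score drops below a threshold. The score_fn stand-in and the candidate character set are assumptions for illustration, not the paper's exact attack.

```python
# A hedged sketch of a one-byte adversarial search against an assumed
# URL classifier exposed as `score_fn` (higher score = more malicious).
import string

def one_byte_adversarial_url(url: str, score_fn, threshold: float = 0.5):
    """Return a URL differing in at most one byte that the model scores benign,
    or None if no single-byte change succeeds."""
    candidates = string.ascii_lowercase + string.digits + "-._~"
    for i in range(len(url)):
        for ch in candidates:
            if ch == url[i]:
                continue
            perturbed = url[:i] + ch + url[i + 1:]
            if score_fn(perturbed) < threshold:   # classified as benign
                return perturbed
    return None
```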

 


 

 

 

The past few years have witnessed great development of Optical Circuit Switches (OCSes), which support high and dynamic bandwidth at low per-bit overhead in data center networks. Although OCSes deliver a variety of attractive properties, a challenge remains due to their non-negligible reconfiguration delay. Previous OCS-based proposals amortize the long reconfiguration overhead by reconfiguring OCSes at intervals of a few hundreds of milliseconds, which inevitably leaves a mass of packets waiting during reconfiguration intervals and hence degrades network performance. In this paper, we propose an OCS-based scheduling scheme named Time-Division Scheduling (TDS), which groups OCSes and reconfigures different groups in sequential time slots, instead of periodically reconfiguring all OCSes. TDS diminishes the impact of the long reconfiguration delay with only direct software modifications. We evaluate the performance of TDS via OMNET++. The experimental results show that TDS delivers significant performance enhancements in latency and slight benefits in throughput.
 

 

DL-based classifiers can attain higher accuracy with lower storage requirements, which suits the VANET perfectly. However, it has been proved that DL models suffer from crafted perturbation data: a small amount of such data can misguide the classifier, and thus backdoors can be created for malicious purposes. This paper studies such a causative attack in the VANET. We present a perturbation-based causative attack that targets the supply chain of DL classifiers in the VANET. We first train a classifier using VANET simulated data which meets the standard accuracy for identifying malicious traffic in the VANET. Then, we elaborate on the effectiveness of our presented attack scheme on this pre-trained classifier. We also explore some feasible approaches to mitigate the outcome of our attack. Experimental results show that the scheme can cause a 10.52% drop in the target DL model’s accuracy.
 
With the rapid development of smart vehicles, the security and privacy issues of the Vehicular Ad-hoc Network (VANET) have drawn significant attention. Devices in an On-Board Unit (OBU) access the internet through the Vehicular Communication Module (VCM); hence, a real-time and accurate intrusion detection method is favored for the VCM. In this paper, we present a Deep Learning (DL) based end-to-end intrusion detection method to automatically detect malware traffic for OBUs. Different from previous intrusion detection methods, our proposed method only requires raw traffic instead of private information features extracted by humans. The performance is compared with previous methods on a public dataset and a simulated real-life VANET dataset. Experimental results show that our method can attain higher performance with lower resource requirements.
 
Network virtualization facilitates the deployment of diversified services and flexible resource management in elastic optical networks (EONs). However, due to the explosive growth of traffic, considerable energy consumption and spectrum usage have restricted the sustainable development of cloud services. This paper addresses the joint energy and spectrum efficiency problem for virtual optical network embedding (VONE) over EONs. We propose a heuristic algorithm to improve energy and spectrum efficiency while keeping a high acceptance rate. With consideration of factors influencing energy and spectrum efficiency, a feasible shortest path is preferred; meanwhile, an appropriate modulation format is dynamically selected according to transmission distance and the trade-off between energy and spectrum consumption. To improve the acceptance rate of virtual network requests, a dual mapping is employed to reinforce the embedding process by integrated mapping of multi-dimensional resources. The simulation results show that the proposed algorithm can achieve joint energy and spectrum efficiency with a much lower blocking probability compared with the baseline approach.
 
Vehicular Ad-hoc Network (VANET) is a heterogeneous network of resource-constrained nodes such as smart vehicles and Road Side Units (RSUs) communicating in a high-mobility environment. Concerning potential malicious misbehaviors in VANETs, real-time and robust intrusion detection methods are required. In this paper, we present a novel Machine Learning (ML) based intrusion detection method to automatically detect intruders globally and locally in VANETs. Compared to previous intrusion detection methods, our method is more robust to the environmental changes that are typical in VANETs, especially when intruders take over senior units like RSUs and Cluster Heads (CHs). The experimental results show that our approach can significantly outperform previous work when vulnerable RSUs exist.
 

JOURNALS

Many backdoor removal techniques in machine learning models require clean in-distribution data, which may not always be available due to proprietary datasets. Model inversion techniques, often considered privacy threats, can reconstruct realistic training samples, potentially eliminating the need for in-distribution data. Prior attempts to combine backdoor removal and model inversion yielded limited results. Our work is the first to provide a thorough understanding of leveraging model inversion for effective backdoor removal by addressing key questions about reconstructed samples’ properties, perceptual similarity, and the potential presence of backdoor triggers.


We establish that relying solely on perceptual similarity is insufficient for robust defenses, and the stability of model predictions in response to input and parameter perturbations is also crucial. To tackle this, we introduce a novel bi-level optimization-based framework for model inversion, promoting stability and visual quality. Interestingly, we discover that reconstructed samples from a pre-trained generator’s latent space are backdoor-free, even when utilizing signals from a backdoored model. We provide a theoretical analysis to support this finding. Our evaluation demonstrates that our stabilized model inversion technique achieves state-of-the-art backdoor removal performance without clean in-distribution data, matching or surpassing performance using the same amount of clean samples.

Deep Neural Networks (DNNs) are currently widely used for high-stakes decision-making in 5G-enabled Industrial Internet of Things (IIoT) systems, such as controlling access to high-security areas, autonomous driving, etc. Despite DNNs’ ability to provide fast, accurate predictions, previous work has revealed that DNNs are vulnerable to backdoor attacks, which cause models to perform abnormally on inputs with predefined triggers. Backdoor triggers are difficult to detect because they are intentionally made inconspicuous to human observers. Furthermore, the privacy protocols of DNNs on IIoT edges and the rapidly changing ambient environments of 5G-enabled mobile edges raise new challenges for building an effective backdoor detector in 5G-enabled IIoT systems. While there is ample literature on backdoor detection, the implications of deploying DNNs in IIoT systems for backdoor detection have yet to be studied. This paper presents an adaptive, lightweight backdoor detector suitable for deployment on 5G-enabled IIoT edges. Our detector leverages the frequency artifacts of backdoor triggers. It can work without prior knowledge of the attack pattern and model details once the triggered samples are successfully modeled in the frequency domain, thus avoiding disruption of the DNNs’ privacy protocols on IIoT edges. We propose a supervised framework that can automatically tailor the detector to the changing environment, and we propose to generate training data for potentially unknown triggers via random perturbations. We focus on DNN-based facial recognition as a concrete application in 5G-enabled IIoT systems to evaluate our proposed framework, and experiment with three different optical environments on two standard face datasets. Our results demonstrate that the proposed framework can improve the previous detection method’s worst-case detection rate by 74.33% and 84.40% on the PubFig and CelebA datasets, respectively, under attack- and target-model-agnostic settings.
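
A minimal sketch of the random-perturbation data generation step, assuming tensor image batches; the patch size and the labeling scheme are illustrative:

```python
# A minimal sketch of generating detector training data with random
# perturbations as stand-ins for unknown triggers: paste a random patch
# into clean images and label those as "triggered".
import torch

def make_detector_batch(clean_images, patch_size=6):
    """clean_images: (B, C, H, W) in [0, 1]. Returns images and 0/1 labels
    (1 = randomly perturbed, a proxy for triggered samples)."""
    b, c, h, w = clean_images.shape
    perturbed = clean_images.clone()
    for i in range(b):
        y0 = torch.randint(0, h - patch_size, (1,)).item()
        x0 = torch.randint(0, w - patch_size, (1,)).item()
        perturbed[i, :, y0:y0 + patch_size, x0:x0 + patch_size] = torch.rand(c, patch_size, patch_size)
    images = torch.cat([clean_images, perturbed], dim=0)
    labels = torch.cat([torch.zeros(b, dtype=torch.long), torch.ones(b, dtype=torch.long)])
    return images, labels
```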

 

Deep Neural Networks are well-known to be vulnerable to Adversarial Examples. Recently, advanced gradient-based attacks were proposed (e.g., BPDA and EOT), which significantly increase the difficulty and complexity of designing effective defenses. In this paper, we present a study on the opportunity of mitigating those powerful attacks with only pre-processing operations. We make the following two contributions. First, we perform an in-depth analysis of those attacks and summarize three fundamental properties that a good defense solution should have. Second, we design a lightweight preprocessing function with these properties and the capability of preserving the model’s usability and robustness against these threats. Extensive evaluations indicate that our solution can effectively mitigate all existing standard and advanced attack techniques, and beats 11 state-of-the-art defense solutions published in top-tier conferences over the past two years.

 

Elastic optical networks have recently been deemed a promising infrastructure to support the ever-increasing bandwidth demand of emerging applications, due to their high transmission rate and fine-grained, flexible spectrum allocation. With the explosive growth of traffic, optimizing energy and spectrum efficiency has become a critical issue for green elastic optical networks, which is closely related to the sustainable development of cloud services. This paper focuses on the joint optimization of energy and spectrum efficiency for virtual optical network embedding in elastic optical networks. A heuristic algorithm, termed ESE, is presented to improve energy and spectrum efficiency while keeping a high acceptance rate. With consideration of factors influencing energy and spectrum efficiency, a feasible shortest path is preferred; meanwhile, an appropriate modulation format is dynamically selected according to transmission distance and the trade-off between energy and spectrum consumption. To improve the acceptance rate of virtual network requests, a dual mapping is employed to reinforce the embedding process by integrated mapping of multi-dimensional resources. The simulation results show that the proposed algorithm can achieve joint energy and spectrum efficiency with a much lower blocking probability compared with two baseline algorithms.

 

With the rapid evolution of network traffic diversity, understanding network traffic has become more pivotal and more formidable. Previously, traffic classification and intrusion detection required burdensome analysis of various traffic features and attack-related characteristics by experts, and private information might even be required. However, due to outdated feature labeling and privacy protocols, existing approaches may no longer fit the characteristics of the changing network environment. In this paper, we present a lightweight deep-learning-aided framework for encrypted traffic classification and intrusion detection, termed Deep-Full-Range (DFR). Thanks to deep learning, DFR is able to learn from raw traffic without manual intervention or private information. Within this framework, our proposed algorithms are compared with other state-of-the-art methods on two public datasets. The experimental results show that our framework not only outperforms the state-of-the-art methods by an average of 13.49% in F1 score on encrypted traffic classification and 12.15% in F1 score on intrusion detection, but also requires much less storage.

 

BOOK

Engineering and science research can be difficult for beginners because scientific research is fraught with constraints and disciplines. Research and Technical Writing for Science and Engineering breaks down the entire process of conducting engineering and scientific research. The book covers guidelines and topics on conducting research, as well as how to better interact with your advisor. Key features include advice on conducting a literature review, conducting experiments, and writing a good paper summarizing your findings, along with a tutorial on how to increase the impact of research and how to manage research resources. By reflecting on the cases discussed in this book, readers will be able to identify specific situations or dilemmas in their own lives, as the authors provide comprehensive suggestions based on their own experiences.

 

SERVICE

I have reviewed for CVPR’23, ’22 (Outstanding Reviewer), NeurIPS’22, ICML’23, ’22, ICCV’23, ECCV’22, AAAI’22, KSEM’22, ’21, EUC’21, IEEE ISPA’21, and ICA3PP’20.

I also review for IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS, IF: 14.255), IEEE Transactions on Dependable and Secure Computing (IEEE TDSC, IF: 6.791), IEEE Transactions on Industrial Informatics (IEEE TII, IF: 10.215), and Vehicular Communications (VEHCOM, IF: 8.373).

I am also the leading chair of the IEEE Trojan Removal Competition (IEEE TRC’22) and the Industry Chair and Publicity Chair of the IEEE International Conference on Intelligent Data and Security (IEEE IDS’22).

Recent Posters


Recent Presentations