(Volume: 2, Issue: 2)
Upgrade Image Quality through Multi-focus Image Fusion
An accurate diagnosis of a disease from a medical image, the recognition of a target without ambiguity in robotics and other computer vision-based applications, and the precise evaluation of images from different sensors in remote sensing all rely strongly on one thing: "the image quality". A quality-degraded image with no sharp details can render no favour to any application deploying it. So, what can be done to improve the quality of the image scene? A promising way is image fusion. As the name indicates, image fusion clubs together the features from various images of the same scene to produce an image with finer details. Remember that only images of the same scene yield an image quality improvement; images of different scenes can again call the quality into question! The next query that may arise in one's mind is whether a higher visual perception of the image is possible. Certainly yes, when the fused images of a scene are of various focal depths, giving rise to multi-focus image fusion at pixel level, feature level or decision level.

Since 2007, a number of researchers have shown rigorous interest in multi-focus image fusion to support image-based biomedical and computer vision applications. In 2021, Vineeta Singh and Vandana Dixit Kaushik of Harcourt Butler Technical University, Kanpur, India came up with an improved multi-focus image fusion approach, employing Renyi entropy and the Atom Search Sine Cosine Algorithm (ASSCA) to choose an optimal fusion coefficient. Using the Lytro Multi-focus Image Dataset, their work, published in Signal, Image and Video Processing (Vol. 15), Springer, yields a maximal Mutual Information of 1.492, a maximal Peak Signal-to-Noise Ratio of 40.625 dB, a minimal Root Mean-Squared Error of 7.651, a maximum Correlation Coefficient of 0.988 and a minimum Deviation Index of 1.146.
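Fidelity metrics like those quoted above are computed directly from pixel values. As a minimal illustrative sketch (not the authors' code, and using tiny toy "images" flattened into lists), RMSE and PSNR between a reference image and a fused image could be calculated as:

```python
import math

def rmse(reference, fused):
    """Root Mean-Squared Error between two equally sized grayscale images."""
    diffs = [(r - f) ** 2 for r, f in zip(reference, fused)]
    return math.sqrt(sum(diffs) / len(diffs))

def psnr(reference, fused, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means the fused image
    is closer to the reference."""
    mse = sum((r - f) ** 2 for r, f in zip(reference, fused)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2 images flattened to lists of 8-bit pixel intensities
ref = [100, 120, 130, 140]
fus = [102, 118, 131, 139]
print(rmse(ref, fus))
print(psnr(ref, fus))
```

A lower RMSE and a higher PSNR both indicate that the fused image preserves the reference scene more faithfully, which is why the paper reports a minimal RMSE alongside a maximal PSNR.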
The researchers point out that their search for ways to eliminate noise, to compute image blurriness and to identify the aspects that degrade multi-focus image fusion is what motivated them to put forth the approach. These factors can in turn motivate future researchers who aim for supreme image quality, thereby extending strong support to the image-based applications of this digital era. Image Credit: www.freepik.com
Secure Authentication - The Desire of IoT Users
It is no astonishment, in this world of the internet, to see objects or people being monitored and controlled at any time, even from remote locations. The Internet of Things (IoT) is the paradigm behind this, entangling physical objects with computing devices using sensors and actuators, which then establish communication and control through wireless or cloud-based networks. Nowadays, the cloud-based interconnection of IoT devices is the most desired, for the following reason: the cloud can provide software, platform, infrastructure or storage as a service on a "pay as you use" basis for high volumes of communication data! Though it may sound simple, here comes the difficulty: an intruder can also just pay and use one's data for illicit means, since it is hard to authenticate a user among the massive data communications handled within and outside the cloud. So, security-improved user authentication approaches become a necessity in IoT-based applications, provided that the network complexity, computational time and cost stay within specified limits.

A Hybrid and Adaptive Cryptographic (HAC)-based secure authentication framework, involving setup, registration, login, authentication, attack validation and password update phases, is one such approach, presented by Kavitha S. Patil, Dr. Indrajit Mandal and Dr. C. Rangaswamy in Pervasive and Mobile Computing, Elsevier. The researchers confirmed the security by using two different hybrid encryption schemes, one at the login phase and the other at the attack validation phase, offering a minimum communication cost, computation time and memory usage of 0.017 s, 0.060 s and 2.502 MB, respectively. "The future dimension of the work would be to achieve more security by enhancing the authentication framework without any computational complexity", the researchers state in their publication.
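The paper's HAC framework is far more elaborate, but the registration and login phases it names can be illustrated with a generic salted-hash sketch. This is an assumed, simplified stand-in (not the authors' scheme): the "cloud" stores only a salt and a derived key, so a stolen store does not reveal passwords, and login recomputes the derivation.

```python
import hashlib
import secrets

# In-memory stand-in for the cloud-side credential store: user -> (salt, digest)
_store = {}

def register(user: str, password: str) -> None:
    """Registration phase: derive and store a salted hash, never the password."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _store[user] = (salt, digest)

def login(user: str, password: str) -> bool:
    """Login phase: recompute the derivation and compare in constant time."""
    if user not in _store:
        return False
    salt, digest = _store[user]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return secrets.compare_digest(candidate, digest)

register("device42", "s3cret")
print(login("device42", "s3cret"))  # valid credentials accepted
print(login("device42", "wrong"))   # wrong password rejected
```

The constant-time comparison and per-user salt are the kind of low-cost defences a framework like HAC builds on before adding its hybrid encryption layers.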
Currently, IoT networks ease and master communication and control in almost all fields, for instance transportation, mHealth, agriculture, industry and smart homes. Hence, delving into research on improving security in IoT environments holds boundless scope for young researchers.
Image Credit: www.freepik.com
Solar PVs to combat global warming in the future - Real-time fault diagnosis becomes a mandate
Replenishment rates faster than consumption rates, the ability to cut down greenhouse gas emissions and the ease of installation have all led the incessantly growing population to turn to solar energy-based power generation through photovoltaic (PV) cells. Did you know that the G7 committee on Climate, Energy and Environment, in April 2023, called for 1 TW of solar PV in combination with 150 GW of offshore wind by 2030, so as to keep the global temperature rise at 1.5°C? Since the efficiency of PV systems serves as the key driver to attain this goal, faults in PV systems need careful monitoring for rapid detection and removal.

Previously, the diagnosis of faults in PV systems caused by manufacturing defects or wear and tear was achieved through visual inspection, infrared thermography and electrical methods. However, the inefficiency of the former two approaches when applied to small-scale PVs paved the way to the adoption of electrical methods and, in later years, automated fault diagnosis using machine learning approaches. Usually, such automated approaches identify faults by receiving weather measurements like temperature and irradiance from dedicated sensors, together with high-frequency current or voltage measurements. But the more widespread the installation of sensors, the higher the cost and the smaller the number of PV systems that can be monitored for faults.

Three researchers from Belgium have taken the initiative to counteract this issue with an article in Applied Energy, Elsevier (Vol. 305). In their fault diagnosis procedure, the researchers applied 24-hour satellite data and low-frequency inverter measurements, which are more cost-efficient than on-site sensors and high-frequency current or voltage measurements, to a recurrent neural network. The researchers have not stopped at identifying six fault types; they have also estimated the output power loss to gauge the severity of each fault.
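The severity idea in the last sentence can be made concrete with a tiny sketch (assumed names and numbers, not the authors' model): compare the power a healthy system would be expected to produce under the current irradiance with what the inverter actually measures.

```python
def power_loss_ratio(expected_kw: float, measured_kw: float) -> float:
    """Fraction of the expected output that is lost; a simple proxy
    for how severe a PV fault is."""
    if expected_kw <= 0:
        raise ValueError("expected power must be positive")
    return max(0.0, (expected_kw - measured_kw) / expected_kw)

# A hypothetical PV string expected to deliver 5.0 kW under current
# irradiance but measured at 3.8 kW by the inverter:
severity = power_loss_ratio(5.0, 3.8)
print(f"{severity:.0%} of expected output lost")  # 24% of expected output lost
```

A loss near zero suggests a benign anomaly, while a large fraction flags a fault worth immediate attention, which is why pairing fault-type classification with a loss estimate is so useful.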
Renewable energy systems, especially solar-based ones, will be at their peak of usage by 2050. Hence, research on real-time and simultaneous multiple-fault detection has a thriving scope.
Image Courtesy: www.pexels.com
E-nose detects harmful molecules to keep us healthy and safe
Would it not sound like fiction if a disease were diagnosed by the smell that the affected organ releases? If a novel medicine or healthcare cosmetic were identified as venomous or not by the odor it gives off? If food were spotted as fresh or rotten by its flavor? If the presence of an explosive were detected by the odor of the molecules that constitute it? If waste were classified as biodegradable or not merely by the pungent smell it gives? This fiction came to reality with the emergence of the e-nose, the electronic nose.

The e-nose is a product of Artificial Intelligence (AI), mimicking the human olfactory system to sense all sorts of odor or aroma and achieving varied classification tasks with great rapidity and effectiveness by analyzing the molecules forming the odor. The e-nose architecture is framed by a sample delivery system, a detection system composed of numerous sensor arrays, and a computing system powered by pattern recognition or machine learning approaches. Once the sample is delivered, individually operating sensors analyse the molecules contained in it, while the computing system records the associated responses and completes the classification task.

Now the dilemma is: how many sensors, and of what kind, are required? The confusion arises because different samples have different odors with different molecular compositions. The sensor types usable in an e-nose include metal-oxide-semiconductor field-effect transistors (MOSFETs), conducting polymers, piezoelectric sensors, quartz crystal microbalances and metal oxide sensors. However, the choice of sensor types and their number needs to be optimized to meet cost and complexity constraints. Four researchers, three from Indonesia and one from the United Kingdom, have deployed the e-nose for beef quality assessment, as reported in Sensing and Bio-Sensing Research.
Employing a dataset from the Harvard Dataverse, their e-nose signal processing involved an ensemble machine learning approach and sensor array optimization to classify the quality of beef as excellent, good, acceptable or spoiled. Although varied sensors were available in large numbers, the optimized sensor count obtained through the ensemble approach caused no performance degradation. Since the current age values health and safety, research on e-nose signal processing can enable remarkable applications in the food and beverage industry, the medical industry, the waste management field and healthcare cosmetic manufacturing.
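At its simplest, an ensemble over a sensor array can be pictured as each sensor channel contributing a class verdict that is then combined by voting. This is a deliberately minimal sketch with made-up verdicts, not the researchers' actual pipeline:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-sensor class predictions by simple majority voting."""
    if not predictions:
        raise ValueError("need at least one prediction")
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-sensor verdicts for one beef sample, each produced by a
# classifier trained on that sensor's response signal:
sensor_verdicts = ["good", "excellent", "good", "acceptable", "good"]
print(majority_vote(sensor_verdicts))  # prints "good"
```

Sensor array optimization then asks which subset of channels can be dropped while the vote (or a weighted variant of it) still classifies as accurately, trading hardware cost against performance.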
Image Credit: www.freepik.com
Machine learning approaches to aid ship speed prediction
Modern technology is always developed with an urge to operate with proper control over greenhouse gas (GHG) emissions, energy efficiency and pollution. Ship design is in no way an exception, since 80% of global trade relies on shipping. The International Maritime Organization (IMO), a United Nations agency ensuring the security, safety and pollution control of ships, has taken measures to counter the emissions by setting the Energy Efficiency Design Index (EEDI) for new ship designs and the Ship Energy Efficiency Management Plan (SEEMP) for all ships. Both these measures from the IMO serve as initiatives to cut down carbon emissions and to achieve high energy efficiency in ships.

So, to improve energy efficiency, can alternative fuels with low GHG emissions be adopted? Can non-renewable equipment be replaced with renewable alternatives? Yes, of course. But the associated costs can exceed the merits offered! However, there is a very simple alternative: just predict the speed of the ship to have control over it. Predicting an optimal speed for ships can reduce fuel consumption, leading to lower emissions and greater energy efficiency. Not only that, speed prediction also comes with added advantages, such as informing ports about a ship's Expected Time of Arrival (ETA) or avoiding collisions between ships taking a narrow voyage route.

Combined research by Egyptian and UK researchers in Ocean Engineering, Elsevier, presents ship speed prediction for efficient shipping operation under real operational conditions. Testing fourteen machine learning approaches on publicly available sensor data from the domestic ferry M/S Smyril operating around the Faroe Islands, the researchers state that their results could help ship management companies create further advanced models for route optimization, ship tracking, voyage planning and more.
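Why does controlling speed save so much fuel? A classic rule of thumb in shipping (a general approximation, not a result from the cited paper) is that the propulsive power a hull demands grows roughly with the cube of its speed, so even a modest slowdown yields a disproportionate fuel saving per unit time:

```python
def fuel_saving_fraction(base_speed_kn: float, reduced_speed_kn: float) -> float:
    """Approximate fractional fuel saving per unit time from slowing down,
    using the cubic speed-power rule of thumb."""
    if base_speed_kn <= 0 or reduced_speed_kn <= 0:
        raise ValueError("speeds must be positive")
    return 1.0 - (reduced_speed_kn / base_speed_kn) ** 3

# Slowing a hypothetical ship from 20 knots to 18 knots (a 10% reduction):
saving = fuel_saving_fraction(20.0, 18.0)
print(f"{saving:.1%} less fuel per unit time")  # 27.1% less fuel per unit time
```

This is exactly why an accurate speed prediction model is valuable: it lets operators pick the slowest speed that still meets the schedule, capturing most of this cubic saving.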
Researchers aiming to work on this theme should keep an eye on the appropriate choice of machine learning approaches and operational parameters.
Image Credit: www.unsplash.com