Adding Images to a CTU Post

I wanted to make a quick post for anyone attending Colorado Technical University (CTU) who would like to know the “trick” to adding images directly into your discussion board posts. I use the word “trick” loosely since it is really just editing a bit of HTML code. It should be noted that this will require you to host your image someplace online so you can reference it from your post. In my case, I just host the images on my site, but really any Internet-accessible repository will do.

The following steps may help other people at various universities, but I can only say that I have tested it successfully at CTU.

Step 1. Write your post
Write your discussion board post as normal and paste the text into the online form. After the text is pasted, I like to enter a keyword that I can quickly identify within the HTML code, such as IMAGEHERE.

Step 2. Edit the HTML
Next, find the button along the toolbar that looks like a less-than symbol, a forward slash, and a greater-than symbol (</>). This button allows you to edit the HTML code of your post. Once you are editing the HTML, find the keyword you placed in your post.

Step 3. Add the Image Tag
Now, you will want to replace your keyword with the image tag in the HTML. You can get away with just using the src attribute within the image tag to directly load the image. If you feel like being more creative, I would recommend you check out W3Schools for HTML examples (https://www.w3schools.com/html/html_images.asp).
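For example, if you used IMAGEHERE as your keyword, you could replace it with something like the tag below (the URL is just a placeholder for wherever you host your image):

<img src="https://example.com/my-image.jpg" alt="Short description of the image">

The src attribute tells the browser where to load the image from, and the alt text is what readers see if the image fails to load.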

Step 4. View your Image
Finally, click on the </> button again to go back to the standard editing mode. In normal mode, you will be able to view your image before you submit your post. You can go back into the HTML editor and tweak anything you would like before you post to the discussion board. If you are a CTU student, you most likely already know that once you post, you cannot edit it. So I highly recommend you check the image prior to posting.

That’s it! You have successfully edited the HTML code of your discussion board post to display the image inline. I hope this helps!

– Michael

IDS/IPS Considerations

Network security is an important component of overall enterprise security. Using an intrusion detection system (IDS) or an intrusion prevention system (IPS) can aid in detecting and blocking network attacks. A typical installation of an IDS/IPS system includes sensors to collect information, a management server, and a management console used to view that information (Longe Olumide, Lawal, & Ibitola, 2014). The placement of the sensors is a critical design step that needs upfront thought and consideration. The first consideration for sensor placement is determining the overall goal of the IDS/IPS system (Longe Olumide et al., 2014).

In a traditional network environment, an IDS or IPS would be placed in-line with all egress points to the Internet (Longe Olumide et al., 2014). Besides connections to the Internet, direct links to partner organizations should also have an IDS/IPS sensor to monitor inbound and outbound traffic (Longe Olumide et al., 2014). Another consideration is remote employees who may be connecting to the enterprise from an unknown network. The point where remote employees enter the network should be treated as an unknown source, and that traffic should be monitored with an IDS/IPS sensor.

Moving beyond brick-and-mortar networks, many organizations now host systems within cloud environments (Sakr, Tawfeeq, & El-Sisi, 2019). Although there are multiple ways to deploy an IDS/IPS system within a cloud environment, one common technique is to use a host-based IDS/IPS sensor (Sakr et al., 2019). Along with placing an IDS/IPS sensor at the connection to the Internet, a host-based sensor can help monitor traffic between each host. Much like in a traditional network, it is important to monitor traffic in and out of sensitive systems. Cloud environments commonly host sensitive systems, so having a sensor on each host would not be considered excessive.

Regardless of whether the environment is a more traditional layout or cloud-based, it is essential to determine what technique will be used to detect attacks. Most IDS/IPS systems use signature-based or anomaly-based technologies to determine when an attack is taking place (Sakr et al., 2019). It is difficult to say whether signature-based or anomaly-based systems are more effective at detecting attacks, which is why it is common to utilize both techniques in a hybrid approach (Sakr et al., 2019).

It is difficult to give a generic answer to how an IDS/IPS system should be set up in an organization without first understanding where the sensitive information is stored (Longe Olumide et al., 2014). It is safe to state that an IDS/IPS sensor should be placed on the Internet connection, on connections to partner organizations, and on the access point used by remote workers. It is also important to consider an IDS/IPS system for cloud-based environments as well as the traditional networks within your organization.

References

Longe Olumide, B., Lawal, B., & Ibitola, A. (2014). Strategic Sensor Placement for Intrusion Detection in Network-Based IDS. International Journal of Intelligent Systems and Applications, 6(2), 61-68. doi:10.5815/ijisa.2014.02.08

Sakr, M. M., Tawfeeq, M. A., & El-Sisi, A. B. (2019). Network Intrusion Detection System based PSO-SVM for Cloud Computing. International Journal of Computer Network and Information Security, 10(3), 22. doi:10.5815/ijcnis.2019.03.04

Motion Sensor Lights

This is a short post to follow up on a video we made showing how to add a motion sensor to a strip of LEDs so they will automatically turn on for you. Since the video, I have mounted the components under the shelf and found that the system works quite well.

The stairs into the lab are approximately 14 feet away from the sensor. I have the sensor set to the maximum sensitivity setting, and the LED strip turns on as soon as I walk down the stairs. These lights also turn on before the store-bought motion lights I have on the ceiling.

The hardest part of the installation was lying on my back so I could work under the shelf. If I were to install these again, I would turn the shelf over so I could look down on it and do a better job. Since I had already installed the shelf, I did not feel the need to pull it out to install the lights, but hindsight is always 20/20. Another factor is that I installed this sensor in my lab, so I was not overly concerned about making the wiring look nice. You cannot see the wires when you stand in front of the shelf, and that was really all I cared about.

As I said, the install is not much of a “looker”, but it gets the job done.

The circuit is not complex and consists of the following parts:

Voltage regulator – LM2596
Relay – Keyes_Relay
Motion sensor – HC-SR501
Arduino – Nano
Power supply – 12v



The Arduino code is simple, but we posted it on GitHub if you would like to look at it.
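To give you an idea of the logic, here is a minimal sketch along the lines of what the circuit needs. The pin numbers and the one-minute hold time are just assumptions for this example, so check the GitHub repository for the actual code.

// Illustrative sketch: energize the relay (and the LED strip behind it)
// when the HC-SR501 reports motion, then hold the lights on for a timeout.
// Assumed wiring: PIR output on D2, relay input on D7, LM2596 dropping the
// 12 V supply down to power the Nano, relay switching the 12 V strip.

const int PIR_PIN = 2;
const int RELAY_PIN = 7;
const unsigned long HOLD_MS = 60000;  // keep the lights on for 60 seconds

unsigned long lastMotion = 0;

void setup() {
  pinMode(PIR_PIN, INPUT);
  pinMode(RELAY_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, LOW);  // lights off at boot
}

void loop() {
  if (digitalRead(PIR_PIN) == HIGH) {   // motion detected
    lastMotion = millis();
    digitalWrite(RELAY_PIN, HIGH);      // lights on
  } else if (millis() - lastMotion > HOLD_MS) {
    digitalWrite(RELAY_PIN, LOW);       // no motion for a while, lights off
  }
}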

I hope this helps someone out there.

– Michael

RFID Implants 101

Maybe you have heard about an implantable radio frequency identification (RFID) chip but you are not sure what it is. In that case, you came to the right place. In this post, I will look at what RFID is, the different types of implants, and what they can be used for.

What is RFID?
RFID is a general term used to describe devices that can be accessed wirelessly (using radio frequency) to read the contents of a tag. RFID can be divided into different sub-categories. We will look at the following three in this post.

Low Frequency (LF): 125-134 kHz
High Frequency (HF): 13.56 MHz
Ultra-High Frequency (UHF): 856-960 MHz



Each sub-category of RFID has different advantages and disadvantages that you would want to look into depending on your needs. For this post, I will be focusing on LF and HF, since these are the most common types of implantable chips.

What we normally refer to as an “RFID tag” is in the LF part of the spectrum around 125 kHz. A Near Field Communication (NFC) tag operates in the HF part of the spectrum at 13.56 MHz.

What makes up an RFID System?
An RFID system is made up of two main parts: a reader and a tag. A common example is the wall-mounted reader you might see at your office when you enter the building. Many organizations in the United States use the HID access card system and provide a small white plastic card to their employees. The card is scanned by the reader when the employee wants to use the door.

The typical range for an LF or HF tag is less than three feet. In practice, I have observed a range of a few inches. The implantable RFID tags that are on the market today are LF and HF tags and have an even shorter range.

It is worth noting that UHF tags can have a read distance of up to 328 feet (100 meters). To accomplish those longer read distances, the tag is typically larger and is powered by a battery. In my research, I have not been able to locate an implantable UHF tag.
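If you want to experiment with the reader/tag interaction yourself, a cheap 13.56 MHz reader module such as the MFRC522 hooked to an Arduino will dump the unique ID of any HF/NFC tag you present to it. The sketch below is just a rough example using the common MFRC522 Arduino library; the SS/RST pin choices are assumptions based on a typical Uno/Nano wiring, and an LF (125 kHz) tag would need a different reader.

// Illustrative sketch: print the UID of an HF (13.56 MHz) tag using an
// MFRC522 reader module and the MFRC522 Arduino library.
#include <SPI.h>
#include <MFRC522.h>

const byte SS_PIN = 10;   // assumed slave-select pin
const byte RST_PIN = 9;   // assumed reset pin
MFRC522 reader(SS_PIN, RST_PIN);

void setup() {
  Serial.begin(9600);
  SPI.begin();
  reader.PCD_Init();
}

void loop() {
  // Wait for a tag to enter the field, then print its UID bytes in hex.
  if (reader.PICC_IsNewCardPresent() && reader.PICC_ReadCardSerial()) {
    for (byte i = 0; i < reader.uid.size; i++) {
      Serial.print(reader.uid.uidByte[i], HEX);
      Serial.print(' ');
    }
    Serial.println();
    reader.PICC_HaltA();  // stop talking to this tag until it is presented again
  }
}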

What is an RFID Implant?
An RFID implant is typically housed in a small glass capsule that can be implanted under the skin using a syringe. Depending on the chip you select, they range from 11 to 13 mm in length, with a diameter of 2 mm.



I have personally obtained my RFID implants from Dangerous Things and Cyberise. Both sites sell RFID implant kits that come with the chip preloaded in a sterilized injection syringe, along with gloves and the other items needed during the procedure.

It has been my experience that it takes a few days for the implant to be usable. This is due to the irritation of the tissue caused by the needle during the implant process. That should give you some indication of how sensitive these chips are if a small amount of irritation to the surrounding tissue can cause the implant to not function correctly.

Why get an implant?
Getting an implant is definitely a personal decision. I purchased my first implant along with a friend who was interested in the process. Since my first implant in 2016, I have received three more, for a total of four chips. Each chip serves a different purpose.

The first implant is used to replace my HID door access card at my office so I do not need to carry around a badge all day. The others are NFC chips. As we talked about earlier, NFC is a subset of RFID. One NFC chip is a VivoKey, which can be used as an online authentication token. The other two NFC chips are for data storage. Each chip can hold 1,868 bytes. When I purchased the NFC storage chips in 2018/2019, they were the largest-capacity chips on the market. I think it will be some time before we are able to carry large amounts of data on a chip implanted in our hands.

– Michael

Augmented Reality in Vehicles

Introduction

Having the ability to obtain and process important information while driving a vehicle is critical to safety. Most vehicles today are equipped with a global positioning system (GPS) to help make navigation easier. Technology such as a heads-up display (HUD) can display the GPS data on the windshield so that the driver can keep their eyes on the road.

This paper explores the innovation of expanding on current technology to build an augmented reality (AR) system to help improve driver and pedestrian safety in and around roadways. Research has already been performed to combine many different technologies to make driving safer, but there are still accidents that cost human lives every year. The goal of this innovation is to provide crucial information to the driver as it is needed so that the number of accidents can be reduced.

Scope

Systems that utilize deep learning, or machine learning, are already in use to recognize and process information more quickly than humans (Abdi & Meddeb, 2018). Despite the advancements in computing power over the last several years, we are still a way off from a fully autonomous vehicle. Deep learning techniques could be used in an AR system to help relay critical information to a driver (Abdi & Meddeb, 2018). Sensors such as ultrasonic and night vision could be used to relay information to the driver via an AR system, so they are aware of unseen issues ahead. Providing crucial information to the driver when conditions are not ideal can help avoid accidents while on the road.

Improving visibility is the first step in making vehicles safer. Systems such as automatic braking have already been invented and are in use in cars today. AR systems could be used to help predict if somebody is going to step into the roadway or to anticipate if a car is going to change lanes or stop suddenly. The idea is not to create a fully autonomous vehicle, but rather to have the computer systems provide critical information to the driver so that they are able to make the decision.

One of the main decisions that needs to be made when driving revolves around navigation, and the majority of drivers today use a GPS to help make those decisions. An AR system could enhance the navigation experience for drivers. Drivers would be able to see information such as which exit to take overlaid on the roadway, giving them a clear indication of how to navigate. This would also help reduce the risk of accidents due to inattentive driving while looking at a GPS or a map.

However, there are risks to utilizing an AR system. Accidents have happened while people have been using AR in the past. While playing Pokémon Go, a young man fell onto an electric railroad track and sustained serious injuries throughout his body (Kate Gemma, Kai Yuen, & Khan, 2018). Ultimately, he required amputation of one of his legs due to the injuries (Kate Gemma et al., 2018). Other injuries related to AR use have also been reported over the years. Cases such as this demonstrate that technology can be distracting when it is not used properly. An AR vehicle system would need to be thoroughly tested to make sure it would not be distracting to the driver.

Purpose

The overall goal of having an AR system in a vehicle is to improve safety for the occupants of the vehicle and pedestrians alike. By providing the driver with real-time information about the surrounding conditions, such as a pedestrian walking into the roadway, the driver would have more time to react and avoid the accident. A secondary goal of the AR system is to aid in navigation. Navigating an unfamiliar city can be hazardous to the driver and the people around them. An AR system could be used to overlay the correct directions over the real-world streets the driver is able to see out of the windshield.

Supporting forces

One supporting force is the current state of the liquid crystal on silicon (LCoS) display technology. In the past few years, LCoS panels have achieved a resolution of 4K2K, and research is underway for 8K4K resolution panels (Huang, Engle, Chen, & Wu, 2018). The panels also have a sub-millisecond response time for intensity modulation (Huang et al., 2018). By increasing the resolution of the panels, an AR system would appear more realistic and aid in user adoption.

Along with the LCoS panels, technology has advanced around the various types of sensors that can be added to vehicles. New types of visual sensors have been developed that can be used to aid an AR system in correctly detecting obstacles (Abdi & Meddeb, 2018). The idea of merging technology with the human driver is being referred to as “cooperative driving” (Abdi & Meddeb, 2018). A cooperative driving system can be seen as a pathway to fully autonomous vehicles.

Challenging forces

One challenging force would be user acceptance of an AR system within a vehicle. Older drivers could easily read and interpret information from a standard vehicle dashboard but demonstrated difficulty when needing to read a dashboard and follow navigation directions (Kim & Dey, 2016). Younger drivers did not exhibit the same difficulty when asked to perform similar tasks (Kim & Dey, 2016). However, the majority of the safety advantages will come from older drivers utilizing an AR system to help avoid accidents.

Method

Like most research dealing with new technology, there are a lot of unanswered questions. To help answer these questions, experts in both the technology and the psychology of human drivers should be consulted to better understand the feasibility of an AR system being adopted. After examining different types of methods, it was determined that the Delphi method would be suitable for gaining insight from various experts (Haughey, n.d.). The Delphi method is used to anonymously gather the thoughts of different experts on a question (Haughey, n.d.). These thoughts are collected, combined, and then shared back with the group of experts (Haughey, n.d.). This process is repeated until the experts reach a consensus that answers the proposed question (Haughey, n.d.).

The Delphi method was selected because it can be used over an extended period of time and does not require the experts to be in the same physical space. Without a time constraint, experts have the ability to research and think about their answers to the proposed question. By not requiring people to be in the same physical location, experts from around the world would be able to participate in the process. These factors should increase the accuracy of the predictions made by the experts.

References

Abdi, L., & Meddeb, A. (2018). Driver information system: A combination of augmented reality, deep learning and vehicular Ad-hoc networks. Multimedia Tools and Applications, 77(12), 14673-14703. doi:10.1007/s11042-017-5054-6

Haughey, D. (n.d.). Delphi technique: A step-by-step guide. Retrieved from https://www.projectsmart.co.uk/delphi-technique-a-step-by-step-guide.php

Huang, Y., Engle, L., Chen, R., & Wu, S.-T. (2018). Liquid-Crystal-on-Silicon for Augmented Reality Displays. Applied Sciences, 8(12). doi:10.3390/app8122366

Kate Gemma, R., Kai Yuen, W., & Khan, M. (2018). Augmented reality game-related injury. BMJ Case Reports, 11(1). doi:10.1136/bcr-2017-224012

Kim, S., & Dey, A. K. (2016). Augmenting human senses to improve the user experience in cars: applying augmented reality and haptics approaches to reduce cognitive distances. Multimedia Tools and Applications, 75(16), 9587-9607. doi:10.1007/s11042-015-2712-4